00:00:00.001 Started by upstream project "autotest-nightly" build number 4274 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3637 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.071 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.071 The recommended git tool is: git 00:00:00.072 using credential 00000000-0000-0000-0000-000000000002 00:00:00.073 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.092 Fetching changes from the remote Git repository 00:00:00.094 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.129 Using shallow fetch with depth 1 00:00:00.129 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.129 > git --version # timeout=10 00:00:00.172 > git --version # 'git version 2.39.2' 00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.212 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.212 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.212 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.224 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.236 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.236 > git config core.sparsecheckout # timeout=10 00:00:04.247 > git read-tree -mu HEAD # timeout=10 00:00:04.263 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:04.280 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.280 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.410 [Pipeline] Start of Pipeline 00:00:04.425 [Pipeline] library 00:00:04.426 Loading library shm_lib@master 00:00:04.427 Library shm_lib@master is cached. Copying from home. 00:00:04.447 [Pipeline] node 00:00:04.462 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.464 [Pipeline] { 00:00:04.475 [Pipeline] catchError 00:00:04.476 [Pipeline] { 00:00:04.489 [Pipeline] wrap 00:00:04.498 [Pipeline] { 00:00:04.507 [Pipeline] stage 00:00:04.509 [Pipeline] { (Prologue) 00:00:04.527 [Pipeline] echo 00:00:04.529 Node: VM-host-SM38 00:00:04.535 [Pipeline] cleanWs 00:00:04.546 [WS-CLEANUP] Deleting project workspace... 00:00:04.546 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.554 [WS-CLEANUP] done 00:00:04.741 [Pipeline] setCustomBuildProperty 00:00:04.831 [Pipeline] httpRequest 00:00:05.159 [Pipeline] echo 00:00:05.161 Sorcerer 10.211.164.20 is alive 00:00:05.169 [Pipeline] retry 00:00:05.170 [Pipeline] { 00:00:05.185 [Pipeline] httpRequest 00:00:05.190 HttpMethod: GET 00:00:05.191 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.191 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.192 Response Code: HTTP/1.1 200 OK 00:00:05.192 Success: Status code 200 is in the accepted range: 200,404 00:00:05.193 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.482 [Pipeline] } 00:00:05.496 [Pipeline] // retry 00:00:05.502 [Pipeline] sh 00:00:05.785 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.801 [Pipeline] httpRequest 00:00:06.149 [Pipeline] echo 00:00:06.150 Sorcerer 10.211.164.20 is alive 00:00:06.157 [Pipeline] retry 00:00:06.159 [Pipeline] { 00:00:06.170 [Pipeline] httpRequest 00:00:06.175 HttpMethod: GET 00:00:06.175 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:06.176 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:06.177 Response Code: HTTP/1.1 200 OK 00:00:06.177 Success: Status code 200 is in the accepted range: 200,404 00:00:06.178 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:24.413 [Pipeline] } 00:00:24.431 [Pipeline] // retry 00:00:24.439 [Pipeline] sh 00:00:24.728 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:27.280 [Pipeline] sh 00:00:27.567 + git -C spdk log --oneline -n5 00:00:27.567 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:27.567 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:27.567 4bcab9fb9 correct kick for CQ full case 00:00:27.567 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:27.567 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:27.588 [Pipeline] writeFile 00:00:27.603 [Pipeline] sh 00:00:27.891 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:27.904 [Pipeline] sh 00:00:28.188 + cat autorun-spdk.conf 00:00:28.189 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.189 SPDK_TEST_NVME=1 00:00:28.189 SPDK_TEST_FTL=1 00:00:28.189 SPDK_TEST_ISAL=1 00:00:28.189 SPDK_RUN_ASAN=1 00:00:28.189 SPDK_RUN_UBSAN=1 00:00:28.189 SPDK_TEST_XNVME=1 00:00:28.189 SPDK_TEST_NVME_FDP=1 00:00:28.189 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.198 RUN_NIGHTLY=1 00:00:28.200 [Pipeline] } 00:00:28.213 [Pipeline] // stage 00:00:28.228 [Pipeline] stage 00:00:28.230 [Pipeline] { (Run VM) 00:00:28.242 [Pipeline] sh 00:00:28.528 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:28.528 + echo 'Start stage prepare_nvme.sh' 00:00:28.528 Start stage prepare_nvme.sh 00:00:28.528 + [[ -n 8 ]] 00:00:28.528 + disk_prefix=ex8 00:00:28.528 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:28.528 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:28.528 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:28.528 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.528 ++ SPDK_TEST_NVME=1 00:00:28.528 ++ SPDK_TEST_FTL=1 00:00:28.528 
++ SPDK_TEST_ISAL=1 00:00:28.528 ++ SPDK_RUN_ASAN=1 00:00:28.528 ++ SPDK_RUN_UBSAN=1 00:00:28.528 ++ SPDK_TEST_XNVME=1 00:00:28.528 ++ SPDK_TEST_NVME_FDP=1 00:00:28.528 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.528 ++ RUN_NIGHTLY=1 00:00:28.528 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:28.528 + nvme_files=() 00:00:28.528 + declare -A nvme_files 00:00:28.528 + backend_dir=/var/lib/libvirt/images/backends 00:00:28.528 + nvme_files['nvme.img']=5G 00:00:28.528 + nvme_files['nvme-cmb.img']=5G 00:00:28.528 + nvme_files['nvme-multi0.img']=4G 00:00:28.528 + nvme_files['nvme-multi1.img']=4G 00:00:28.528 + nvme_files['nvme-multi2.img']=4G 00:00:28.528 + nvme_files['nvme-openstack.img']=8G 00:00:28.528 + nvme_files['nvme-zns.img']=5G 00:00:28.528 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:28.528 + (( SPDK_TEST_FTL == 1 )) 00:00:28.528 + nvme_files["nvme-ftl.img"]=6G 00:00:28.528 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:28.528 + nvme_files["nvme-fdp.img"]=1G 00:00:28.528 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:28.528 + for nvme in "${!nvme_files[@]}" 00:00:28.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:00:28.528 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.528 + for nvme in "${!nvme_files[@]}" 00:00:28.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:00:28.528 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:28.528 + for nvme in "${!nvme_files[@]}" 00:00:28.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:00:28.528 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.528 + for nvme in "${!nvme_files[@]}" 00:00:28.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:00:28.529 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:28.529 + for nvme in "${!nvme_files[@]}" 00:00:28.529 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:00:29.102 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.102 + for nvme in "${!nvme_files[@]}" 00:00:29.102 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:00:29.102 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.364 + for nvme in "${!nvme_files[@]}" 00:00:29.364 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:00:29.364 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.364 + for nvme in "${!nvme_files[@]}" 00:00:29.364 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:00:29.364 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:29.364 + for nvme in "${!nvme_files[@]}" 00:00:29.364 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img 
-s 5G 00:00:29.938 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.938 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:00:29.938 + echo 'End stage prepare_nvme.sh' 00:00:29.938 End stage prepare_nvme.sh 00:00:29.953 [Pipeline] sh 00:00:30.240 + DISTRO=fedora39 00:00:30.240 + CPUS=10 00:00:30.240 + RAM=12288 00:00:30.240 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:30.240 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:30.240 00:00:30.240 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:30.240 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:30.240 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:30.240 HELP=0 00:00:30.240 DRY_RUN=0 00:00:30.240 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:00:30.240 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:30.240 NVME_AUTO_CREATE=0 00:00:30.240 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:00:30.240 NVME_CMB=,,,, 00:00:30.240 NVME_PMR=,,,, 00:00:30.240 NVME_ZNS=,,,, 00:00:30.240 NVME_MS=true,,,, 00:00:30.240 NVME_FDP=,,,on, 00:00:30.240 SPDK_VAGRANT_DISTRO=fedora39 00:00:30.240 SPDK_VAGRANT_VMCPU=10 00:00:30.240 SPDK_VAGRANT_VMRAM=12288 00:00:30.240 SPDK_VAGRANT_PROVIDER=libvirt 00:00:30.240 SPDK_VAGRANT_HTTP_PROXY= 00:00:30.240 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:30.240 SPDK_OPENSTACK_NETWORK=0 00:00:30.240 VAGRANT_PACKAGE_BOX=0 00:00:30.240 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:30.240 FORCE_DISTRO=true 00:00:30.240 VAGRANT_BOX_VERSION= 00:00:30.240 EXTRA_VAGRANTFILES= 00:00:30.240 NIC_MODEL=e1000 00:00:30.240 00:00:30.240 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:30.240 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:32.792 Bringing machine 'default' up with 'libvirt' provider... 00:00:33.053 ==> default: Creating image (snapshot of base box volume). 00:00:33.315 ==> default: Creating domain with the following settings... 
00:00:33.315 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731806441_0da9df39c3da7ebad89c 00:00:33.315 ==> default: -- Domain type: kvm 00:00:33.315 ==> default: -- Cpus: 10 00:00:33.315 ==> default: -- Feature: acpi 00:00:33.315 ==> default: -- Feature: apic 00:00:33.315 ==> default: -- Feature: pae 00:00:33.315 ==> default: -- Memory: 12288M 00:00:33.315 ==> default: -- Memory Backing: hugepages: 00:00:33.315 ==> default: -- Management MAC: 00:00:33.315 ==> default: -- Loader: 00:00:33.315 ==> default: -- Nvram: 00:00:33.315 ==> default: -- Base box: spdk/fedora39 00:00:33.315 ==> default: -- Storage pool: default 00:00:33.315 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731806441_0da9df39c3da7ebad89c.img (20G) 00:00:33.315 ==> default: -- Volume Cache: default 00:00:33.315 ==> default: -- Kernel: 00:00:33.315 ==> default: -- Initrd: 00:00:33.315 ==> default: -- Graphics Type: vnc 00:00:33.315 ==> default: -- Graphics Port: -1 00:00:33.315 ==> default: -- Graphics IP: 127.0.0.1 00:00:33.315 ==> default: -- Graphics Password: Not defined 00:00:33.315 ==> default: -- Video Type: cirrus 00:00:33.315 ==> default: -- Video VRAM: 9216 00:00:33.315 ==> default: -- Sound Type: 00:00:33.315 ==> default: -- Keymap: en-us 00:00:33.315 ==> default: -- TPM Path: 00:00:33.315 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:33.315 ==> default: -- Command line args: 00:00:33.315 ==> default: -> value=-device, 00:00:33.315 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:33.315 ==> default: -> value=-drive, 00:00:33.315 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:33.315 ==> default: -> value=-device, 00:00:33.315 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:33.315 ==> default: -> value=-device, 00:00:33.315 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:33.315 ==> default: -> value=-drive, 00:00:33.315 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0, 00:00:33.315 ==> default: -> value=-device, 00:00:33.315 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.315 ==> default: -> value=-device, 00:00:33.315 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:33.315 ==> default: -> value=-drive, 00:00:33.315 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:33.315 ==> default: -> value=-device, 00:00:33.315 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.315 ==> default: -> value=-drive, 00:00:33.315 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:33.315 ==> default: -> value=-device, 00:00:33.315 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.315 ==> default: -> value=-drive, 00:00:33.315 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:33.316 ==> default: -> value=-device, 00:00:33.316 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.316 ==> default: -> value=-device, 00:00:33.316 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:33.316 ==> default: -> value=-device, 00:00:33.316 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:33.316 ==> default: -> value=-drive, 00:00:33.316 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:33.316 ==> default: -> value=-device, 00:00:33.316 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.578 ==> default: Creating shared folders metadata... 00:00:33.578 ==> default: Starting domain. 00:00:35.495 ==> default: Waiting for domain to get an IP address... 00:00:53.664 ==> default: Waiting for SSH to become available... 00:00:53.664 ==> default: Configuring and enabling network interfaces... 00:00:56.213 default: SSH address: 192.168.121.75:22 00:00:56.213 default: SSH username: vagrant 00:00:56.213 default: SSH auth method: private key 00:00:58.131 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:06.277 ==> default: Mounting SSHFS shared folder... 00:01:07.723 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:07.723 ==> default: Checking Mount.. 00:01:08.688 ==> default: Folder Successfully Mounted! 00:01:08.688 00:01:08.688 SUCCESS! 00:01:08.688 00:01:08.688 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:08.688 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:08.688 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:08.688 00:01:08.699 [Pipeline] } 00:01:08.714 [Pipeline] // stage 00:01:08.723 [Pipeline] dir 00:01:08.724 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:08.725 [Pipeline] { 00:01:08.738 [Pipeline] catchError 00:01:08.740 [Pipeline] { 00:01:08.753 [Pipeline] sh 00:01:09.038 + vagrant ssh-config --host vagrant 00:01:09.038 + sed -ne '/^Host/,$p' 00:01:09.038 + tee ssh_conf 00:01:11.585 Host vagrant 00:01:11.585 HostName 192.168.121.75 00:01:11.585 User vagrant 00:01:11.585 Port 22 00:01:11.585 UserKnownHostsFile /dev/null 00:01:11.585 StrictHostKeyChecking no 00:01:11.585 PasswordAuthentication no 00:01:11.585 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:11.585 IdentitiesOnly yes 00:01:11.585 LogLevel FATAL 00:01:11.585 ForwardAgent yes 00:01:11.585 ForwardX11 yes 00:01:11.585 00:01:11.601 [Pipeline] withEnv 00:01:11.603 [Pipeline] { 00:01:11.617 [Pipeline] sh 00:01:11.900 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:11.900 source /etc/os-release 00:01:11.900 [[ -e /image.version ]] && img=$(< /image.version) 00:01:11.900 # Minimal, systemd-like check. 
00:01:11.900 if [[ -e /.dockerenv ]]; then 00:01:11.900 # Clear garbage from the node'\''s name: 00:01:11.900 # agt-er_autotest_547-896 -> autotest_547-896 00:01:11.900 # $HOSTNAME is the actual container id 00:01:11.900 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:11.900 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:11.900 # We can assume this is a mount from a host where container is running, 00:01:11.900 # so fetch its hostname to easily identify the target swarm worker. 00:01:11.900 container="$(< /etc/hostname) ($agent)" 00:01:11.900 else 00:01:11.900 # Fallback 00:01:11.900 container=$agent 00:01:11.900 fi 00:01:11.900 fi 00:01:11.900 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:11.900 ' 00:01:12.172 [Pipeline] } 00:01:12.190 [Pipeline] // withEnv 00:01:12.198 [Pipeline] setCustomBuildProperty 00:01:12.213 [Pipeline] stage 00:01:12.215 [Pipeline] { (Tests) 00:01:12.231 [Pipeline] sh 00:01:12.513 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:12.788 [Pipeline] sh 00:01:13.073 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:13.350 [Pipeline] timeout 00:01:13.350 Timeout set to expire in 50 min 00:01:13.352 [Pipeline] { 00:01:13.365 [Pipeline] sh 00:01:13.649 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:14.221 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:14.235 [Pipeline] sh 00:01:14.521 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:14.799 [Pipeline] sh 00:01:15.083 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:15.363 [Pipeline] sh 00:01:15.648 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:15.908 ++ readlink -f spdk_repo 00:01:15.908 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:15.908 + [[ -n /home/vagrant/spdk_repo ]] 00:01:15.908 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:15.908 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:15.908 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:15.908 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:15.908 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:15.908 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:15.908 + cd /home/vagrant/spdk_repo 00:01:15.908 + source /etc/os-release 00:01:15.908 ++ NAME='Fedora Linux' 00:01:15.908 ++ VERSION='39 (Cloud Edition)' 00:01:15.908 ++ ID=fedora 00:01:15.908 ++ VERSION_ID=39 00:01:15.908 ++ VERSION_CODENAME= 00:01:15.908 ++ PLATFORM_ID=platform:f39 00:01:15.908 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:15.908 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:15.908 ++ LOGO=fedora-logo-icon 00:01:15.908 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:15.908 ++ HOME_URL=https://fedoraproject.org/ 00:01:15.908 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:15.908 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:15.908 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:15.908 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:15.908 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:15.908 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:15.908 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:15.908 ++ SUPPORT_END=2024-11-12 00:01:15.908 ++ VARIANT='Cloud Edition' 00:01:15.908 ++ VARIANT_ID=cloud 00:01:15.908 + uname -a 00:01:15.908 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:15.908 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:16.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:16.428 Hugepages 00:01:16.428 node hugesize free / total 00:01:16.428 node0 1048576kB 0 / 0 00:01:16.428 node0 2048kB 0 / 0 00:01:16.428 00:01:16.428 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:16.428 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:16.428 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:16.689 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:16.689 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:01:16.689 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:16.689 + rm -f /tmp/spdk-ld-path 00:01:16.689 + source autorun-spdk.conf 00:01:16.689 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.689 ++ SPDK_TEST_NVME=1 00:01:16.689 ++ SPDK_TEST_FTL=1 00:01:16.689 ++ SPDK_TEST_ISAL=1 00:01:16.689 ++ SPDK_RUN_ASAN=1 00:01:16.689 ++ SPDK_RUN_UBSAN=1 00:01:16.689 ++ SPDK_TEST_XNVME=1 00:01:16.689 ++ SPDK_TEST_NVME_FDP=1 00:01:16.689 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.689 ++ RUN_NIGHTLY=1 00:01:16.689 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:16.689 + [[ -n '' ]] 00:01:16.689 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:16.689 + for M in /var/spdk/build-*-manifest.txt 00:01:16.689 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:16.689 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:16.689 + for M in /var/spdk/build-*-manifest.txt 00:01:16.689 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:16.689 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:16.689 + for M in /var/spdk/build-*-manifest.txt 00:01:16.689 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:16.689 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:16.689 ++ uname 00:01:16.689 + [[ Linux == \L\i\n\u\x ]] 00:01:16.689 + sudo dmesg -T 00:01:16.689 + sudo dmesg --clear 00:01:16.689 + dmesg_pid=5025 00:01:16.689 
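The trace above expands a short collection loop: each build manifest that exists under /var/spdk (kernel, pkg, repo) is copied into the shared output directory so it lands in the job artifacts. A minimal sketch of the equivalent loop, with the paths taken from this run's trace:

    # Copy whichever build manifests exist on the host into the output folder
    # shared back to the Jenkins workspace (paths as traced above; a sketch,
    # not the autorun script itself).
    for M in /var/spdk/build-*-manifest.txt; do
        [[ -f "$M" ]] && cp "$M" /home/vagrant/spdk_repo/output/
    done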
+ [[ Fedora Linux == FreeBSD ]] 00:01:16.689 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.689 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.689 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:16.689 + [[ -x /usr/src/fio-static/fio ]] 00:01:16.689 + sudo dmesg -Tw 00:01:16.689 + export FIO_BIN=/usr/src/fio-static/fio 00:01:16.689 + FIO_BIN=/usr/src/fio-static/fio 00:01:16.689 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:16.689 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:16.689 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:16.689 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.689 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.689 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:16.689 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.689 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.689 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:16.950 01:21:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:16.950 01:21:25 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:16.950 01:21:25 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.950 01:21:25 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:16.950 01:21:25 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:16.950 01:21:25 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:16.950 01:21:25 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:16.950 01:21:25 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:16.950 01:21:25 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:16.950 01:21:25 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:16.950 01:21:25 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.950 01:21:25 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:01:16.950 01:21:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:16.950 01:21:25 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:16.950 01:21:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:16.950 01:21:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:16.950 01:21:25 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:16.950 01:21:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:16.950 01:21:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:16.950 01:21:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:16.950 01:21:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.950 01:21:25 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.950 01:21:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.950 01:21:25 -- paths/export.sh@5 -- $ export PATH 00:01:16.950 01:21:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.950 01:21:25 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:16.950 01:21:25 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:16.950 01:21:25 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731806485.XXXXXX 00:01:16.950 01:21:25 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731806485.egX58y 00:01:16.950 01:21:25 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:16.950 01:21:25 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:16.950 01:21:25 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:16.951 01:21:25 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:16.951 01:21:25 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:16.951 01:21:25 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:16.951 01:21:25 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:16.951 01:21:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.951 01:21:25 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:16.951 01:21:25 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:16.951 01:21:25 -- pm/common@17 -- $ local monitor 00:01:16.951 01:21:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.951 01:21:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.951 01:21:25 -- pm/common@25 -- $ sleep 1 00:01:16.951 01:21:25 -- pm/common@21 -- $ date +%s 00:01:16.951 01:21:25 -- pm/common@21 -- $ date +%s 00:01:16.951 01:21:25 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731806485 00:01:16.951 01:21:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731806485 00:01:16.951 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731806485_collect-cpu-load.pm.log 00:01:16.951 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731806485_collect-vmstat.pm.log 00:01:17.891 01:21:26 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:17.891 01:21:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.891 01:21:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.891 01:21:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:17.891 01:21:26 -- spdk/autobuild.sh@16 -- $ date -u 00:01:17.891 Sun Nov 17 01:21:26 AM UTC 2024 00:01:17.891 01:21:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.891 v25.01-pre-189-g83e8405e4 00:01:17.891 01:21:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:17.891 01:21:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:17.891 01:21:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:17.891 01:21:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:17.891 01:21:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.891 ************************************ 00:01:17.891 START TEST asan 00:01:17.892 ************************************ 00:01:17.892 using asan 00:01:17.892 01:21:26 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:17.892 00:01:17.892 real 0m0.000s 00:01:17.892 user 0m0.000s 00:01:17.892 sys 0m0.000s 00:01:17.892 01:21:26 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:17.892 ************************************ 00:01:17.892 END TEST asan 00:01:17.892 ************************************ 00:01:17.892 01:21:26 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:17.892 01:21:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:17.892 01:21:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:17.892 01:21:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:17.892 01:21:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:17.892 01:21:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.152 ************************************ 00:01:18.152 START TEST ubsan 00:01:18.152 ************************************ 00:01:18.152 using ubsan 00:01:18.152 01:21:26 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:18.152 00:01:18.152 real 0m0.000s 00:01:18.152 user 0m0.000s 00:01:18.152 sys 0m0.000s 00:01:18.152 ************************************ 00:01:18.153 END TEST ubsan 00:01:18.153 ************************************ 00:01:18.153 01:21:26 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:18.153 01:21:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:18.153 01:21:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:18.153 01:21:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:18.153 01:21:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:18.153 01:21:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:18.153 01:21:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:18.153 01:21:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:18.153 01:21:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:01:18.153 01:21:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:18.153 01:21:26 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:18.153 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:18.153 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:18.724 Using 'verbs' RDMA provider 00:01:29.760 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:41.995 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:41.995 Creating mk/config.mk...done. 00:01:41.995 Creating mk/cc.flags.mk...done. 00:01:41.995 Type 'make' to build. 00:01:41.995 01:21:49 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:41.995 01:21:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:41.995 01:21:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:41.995 01:21:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.995 ************************************ 00:01:41.995 START TEST make 00:01:41.995 ************************************ 00:01:41.995 01:21:49 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:41.995 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:41.995 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:41.995 meson setup builddir \ 00:01:41.995 -Dwith-libaio=enabled \ 00:01:41.995 -Dwith-liburing=enabled \ 00:01:41.995 -Dwith-libvfn=disabled \ 00:01:41.995 -Dwith-spdk=disabled \ 00:01:41.995 -Dexamples=false \ 00:01:41.995 -Dtests=false \ 00:01:41.995 -Dtools=false && \ 00:01:41.995 meson compile -C builddir && \ 00:01:41.995 cd -) 00:01:41.995 make[1]: Nothing to be done for 'all'. 
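The make target above hands off to the bundled xnvme subproject via the meson invocation it just printed. For reference, a standalone sketch of that same configure-and-compile step, assuming the /home/vagrant/spdk_repo layout used in this run:

    # Configure xnvme with the feature set shown in the log: libaio and
    # io_uring backends enabled, libvfn and SPDK integration disabled, and
    # examples/tests/tools all off, then build it in-tree.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH="$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig"
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir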
00:01:43.379 The Meson build system 00:01:43.379 Version: 1.5.0 00:01:43.379 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:43.379 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:43.379 Build type: native build 00:01:43.379 Project name: xnvme 00:01:43.379 Project version: 0.7.5 00:01:43.379 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:43.379 C linker for the host machine: cc ld.bfd 2.40-14 00:01:43.379 Host machine cpu family: x86_64 00:01:43.379 Host machine cpu: x86_64 00:01:43.379 Message: host_machine.system: linux 00:01:43.379 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:43.379 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:43.379 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:43.379 Run-time dependency threads found: YES 00:01:43.379 Has header "setupapi.h" : NO 00:01:43.379 Has header "linux/blkzoned.h" : YES 00:01:43.379 Has header "linux/blkzoned.h" : YES (cached) 00:01:43.379 Has header "libaio.h" : YES 00:01:43.379 Library aio found: YES 00:01:43.379 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:43.379 Run-time dependency liburing found: YES 2.2 00:01:43.379 Dependency libvfn skipped: feature with-libvfn disabled 00:01:43.379 Found CMake: /usr/bin/cmake (3.27.7) 00:01:43.379 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:01:43.379 Subproject spdk : skipped: feature with-spdk disabled 00:01:43.379 Run-time dependency appleframeworks found: NO (tried framework) 00:01:43.379 Run-time dependency appleframeworks found: NO (tried framework) 00:01:43.379 Library rt found: YES 00:01:43.379 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:43.379 Configuring xnvme_config.h using configuration 00:01:43.379 Configuring xnvme.spec using configuration 00:01:43.379 Run-time dependency bash-completion found: YES 2.11 00:01:43.379 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:43.379 Program cp found: YES (/usr/bin/cp) 00:01:43.379 Build targets in project: 3 00:01:43.379 00:01:43.379 xnvme 0.7.5 00:01:43.379 00:01:43.379 Subprojects 00:01:43.379 spdk : NO Feature 'with-spdk' disabled 00:01:43.379 00:01:43.379 User defined options 00:01:43.379 examples : false 00:01:43.379 tests : false 00:01:43.379 tools : false 00:01:43.379 with-libaio : enabled 00:01:43.379 with-liburing: enabled 00:01:43.379 with-libvfn : disabled 00:01:43.379 with-spdk : disabled 00:01:43.379 00:01:43.379 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.640 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:43.640 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:01:43.901 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:01:43.901 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:01:43.901 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:01:43.901 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:01:43.901 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:01:43.901 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:01:43.901 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:01:43.901 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:01:43.901 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:01:43.901 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:01:43.901 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:01:43.901 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:01:43.901 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:01:43.901 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:01:43.901 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:01:43.901 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:01:43.901 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:01:43.901 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:01:43.901 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:01:43.901 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:01:43.901 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:01:43.901 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:01:43.901 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:01:44.162 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:01:44.162 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:01:44.162 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:01:44.162 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:01:44.162 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:01:44.162 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:01:44.162 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:01:44.162 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:01:44.162 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:01:44.162 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:01:44.162 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:01:44.162 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:01:44.162 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:01:44.162 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:01:44.162 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:01:44.162 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:01:44.162 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:01:44.162 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:01:44.162 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:01:44.162 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:01:44.162 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:01:44.162 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:01:44.162 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:01:44.162 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:01:44.162 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:01:44.162 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:01:44.162 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:01:44.162 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:01:44.162 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:01:44.162 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:01:44.162 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:01:44.162 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:01:44.162 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:01:44.162 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:01:44.162 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:01:44.162 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:01:44.423 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:01:44.423 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:01:44.423 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:01:44.423 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:01:44.423 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:01:44.423 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:01:44.423 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:01:44.423 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:01:44.423 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:01:44.423 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:01:44.423 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:01:44.423 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:01:44.423 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:01:44.994 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:01:44.994 [75/76] Linking static target lib/libxnvme.a 00:01:44.994 [76/76] Linking target lib/libxnvme.so.0.7.5 00:01:44.994 INFO: autodetecting backend as ninja 00:01:44.994 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:44.994 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:01:51.579 The Meson build system 00:01:51.579 Version: 1.5.0 00:01:51.579 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:51.579 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:51.579 Build type: native build 00:01:51.579 Program cat found: YES (/usr/bin/cat) 00:01:51.579 Project name: DPDK 00:01:51.579 Project version: 24.03.0 00:01:51.579 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:51.579 C linker for the host machine: cc ld.bfd 2.40-14 00:01:51.579 Host machine cpu family: x86_64 00:01:51.579 Host machine cpu: x86_64 00:01:51.579 Message: ## Building in Developer Mode ## 00:01:51.579 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:51.579 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:51.579 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:51.579 Program python3 found: YES (/usr/bin/python3) 00:01:51.579 Program cat found: YES (/usr/bin/cat) 00:01:51.579 Compiler for C supports arguments -march=native: YES 00:01:51.579 Checking for size of "void *" : 8 00:01:51.579 Checking for size of "void *" : 8 (cached) 00:01:51.579 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:01:51.579 Library m found: YES 00:01:51.579 Library numa found: YES 00:01:51.579 Has header "numaif.h" : YES 00:01:51.579 Library fdt found: NO 00:01:51.579 Library execinfo found: NO 00:01:51.579 Has header "execinfo.h" : YES 00:01:51.579 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:51.579 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.579 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.579 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.579 Run-time dependency openssl found: YES 3.1.1 00:01:51.579 Run-time dependency libpcap found: YES 1.10.4 00:01:51.579 Has header "pcap.h" with dependency libpcap: YES 00:01:51.579 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.579 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.579 Compiler for C supports arguments -Wformat: YES 00:01:51.579 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:51.579 Compiler for C supports arguments -Wformat-security: NO 00:01:51.579 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.579 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.579 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.579 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.579 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.579 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.579 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.579 Compiler for C supports arguments -Wundef: YES 00:01:51.579 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.579 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:51.579 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:51.579 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.579 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.579 Program objdump found: YES (/usr/bin/objdump) 00:01:51.579 Compiler for C supports arguments -mavx512f: YES 00:01:51.579 Checking if "AVX512 checking" compiles: YES 00:01:51.579 Fetching value of define "__SSE4_2__" : 1 00:01:51.579 Fetching value of define "__AES__" : 1 00:01:51.579 Fetching value of define "__AVX__" : 1 00:01:51.579 Fetching value of define "__AVX2__" : 1 00:01:51.579 Fetching value of define "__AVX512BW__" : 1 00:01:51.579 Fetching value of define "__AVX512CD__" : 1 00:01:51.579 Fetching value of define "__AVX512DQ__" : 1 00:01:51.579 Fetching value of define "__AVX512F__" : 1 00:01:51.579 Fetching value of define "__AVX512VL__" : 1 00:01:51.579 Fetching value of define "__PCLMUL__" : 1 00:01:51.579 Fetching value of define "__RDRND__" : 1 00:01:51.579 Fetching value of define "__RDSEED__" : 1 00:01:51.579 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:51.579 Fetching value of define "__znver1__" : (undefined) 00:01:51.579 Fetching value of define "__znver2__" : (undefined) 00:01:51.579 Fetching value of define "__znver3__" : (undefined) 00:01:51.579 Fetching value of define "__znver4__" : (undefined) 00:01:51.579 Library asan found: YES 00:01:51.579 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.579 Message: lib/log: Defining dependency "log" 00:01:51.579 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.579 Message: lib/telemetry: Defining dependency "telemetry" 00:01:51.579 Library rt found: YES 00:01:51.579 Checking for function "getentropy" : NO 00:01:51.579 Message: 
lib/eal: Defining dependency "eal" 00:01:51.579 Message: lib/ring: Defining dependency "ring" 00:01:51.579 Message: lib/rcu: Defining dependency "rcu" 00:01:51.579 Message: lib/mempool: Defining dependency "mempool" 00:01:51.579 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.579 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.579 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.579 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.579 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.579 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:51.579 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:51.579 Compiler for C supports arguments -mpclmul: YES 00:01:51.579 Compiler for C supports arguments -maes: YES 00:01:51.579 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.579 Compiler for C supports arguments -mavx512bw: YES 00:01:51.579 Compiler for C supports arguments -mavx512dq: YES 00:01:51.579 Compiler for C supports arguments -mavx512vl: YES 00:01:51.579 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.579 Compiler for C supports arguments -mavx2: YES 00:01:51.579 Compiler for C supports arguments -mavx: YES 00:01:51.579 Message: lib/net: Defining dependency "net" 00:01:51.579 Message: lib/meter: Defining dependency "meter" 00:01:51.579 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.579 Message: lib/pci: Defining dependency "pci" 00:01:51.579 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.579 Message: lib/hash: Defining dependency "hash" 00:01:51.579 Message: lib/timer: Defining dependency "timer" 00:01:51.579 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.579 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:51.579 Message: lib/dmadev: Defining dependency "dmadev" 00:01:51.579 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:51.579 Message: lib/power: Defining dependency "power" 00:01:51.579 Message: lib/reorder: Defining dependency "reorder" 00:01:51.579 Message: lib/security: Defining dependency "security" 00:01:51.579 Has header "linux/userfaultfd.h" : YES 00:01:51.579 Has header "linux/vduse.h" : YES 00:01:51.579 Message: lib/vhost: Defining dependency "vhost" 00:01:51.579 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:51.579 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:51.579 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:51.579 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:51.579 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:51.579 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:51.579 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:51.579 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:51.579 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:51.579 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:51.579 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:51.579 Configuring doxy-api-html.conf using configuration 00:01:51.579 Configuring doxy-api-man.conf using configuration 00:01:51.579 Program mandb found: YES (/usr/bin/mandb) 00:01:51.579 Program sphinx-build found: NO 00:01:51.579 Configuring rte_build_config.h using configuration 00:01:51.579 Message: 00:01:51.579 ================= 00:01:51.579 Applications Enabled 00:01:51.579 
================= 00:01:51.579 00:01:51.579 apps: 00:01:51.579 00:01:51.579 00:01:51.579 Message: 00:01:51.579 ================= 00:01:51.579 Libraries Enabled 00:01:51.579 ================= 00:01:51.579 00:01:51.579 libs: 00:01:51.579 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:51.579 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:51.579 cryptodev, dmadev, power, reorder, security, vhost, 00:01:51.579 00:01:51.579 Message: 00:01:51.579 =============== 00:01:51.579 Drivers Enabled 00:01:51.579 =============== 00:01:51.579 00:01:51.579 common: 00:01:51.579 00:01:51.579 bus: 00:01:51.579 pci, vdev, 00:01:51.579 mempool: 00:01:51.579 ring, 00:01:51.580 dma: 00:01:51.580 00:01:51.580 net: 00:01:51.580 00:01:51.580 crypto: 00:01:51.580 00:01:51.580 compress: 00:01:51.580 00:01:51.580 vdpa: 00:01:51.580 00:01:51.580 00:01:51.580 Message: 00:01:51.580 ================= 00:01:51.580 Content Skipped 00:01:51.580 ================= 00:01:51.580 00:01:51.580 apps: 00:01:51.580 dumpcap: explicitly disabled via build config 00:01:51.580 graph: explicitly disabled via build config 00:01:51.580 pdump: explicitly disabled via build config 00:01:51.580 proc-info: explicitly disabled via build config 00:01:51.580 test-acl: explicitly disabled via build config 00:01:51.580 test-bbdev: explicitly disabled via build config 00:01:51.580 test-cmdline: explicitly disabled via build config 00:01:51.580 test-compress-perf: explicitly disabled via build config 00:01:51.580 test-crypto-perf: explicitly disabled via build config 00:01:51.580 test-dma-perf: explicitly disabled via build config 00:01:51.580 test-eventdev: explicitly disabled via build config 00:01:51.580 test-fib: explicitly disabled via build config 00:01:51.580 test-flow-perf: explicitly disabled via build config 00:01:51.580 test-gpudev: explicitly disabled via build config 00:01:51.580 test-mldev: explicitly disabled via build config 00:01:51.580 test-pipeline: explicitly disabled via build config 00:01:51.580 test-pmd: explicitly disabled via build config 00:01:51.580 test-regex: explicitly disabled via build config 00:01:51.580 test-sad: explicitly disabled via build config 00:01:51.580 test-security-perf: explicitly disabled via build config 00:01:51.580 00:01:51.580 libs: 00:01:51.580 argparse: explicitly disabled via build config 00:01:51.580 metrics: explicitly disabled via build config 00:01:51.580 acl: explicitly disabled via build config 00:01:51.580 bbdev: explicitly disabled via build config 00:01:51.580 bitratestats: explicitly disabled via build config 00:01:51.580 bpf: explicitly disabled via build config 00:01:51.580 cfgfile: explicitly disabled via build config 00:01:51.580 distributor: explicitly disabled via build config 00:01:51.580 efd: explicitly disabled via build config 00:01:51.580 eventdev: explicitly disabled via build config 00:01:51.580 dispatcher: explicitly disabled via build config 00:01:51.580 gpudev: explicitly disabled via build config 00:01:51.580 gro: explicitly disabled via build config 00:01:51.580 gso: explicitly disabled via build config 00:01:51.580 ip_frag: explicitly disabled via build config 00:01:51.580 jobstats: explicitly disabled via build config 00:01:51.580 latencystats: explicitly disabled via build config 00:01:51.580 lpm: explicitly disabled via build config 00:01:51.580 member: explicitly disabled via build config 00:01:51.580 pcapng: explicitly disabled via build config 00:01:51.580 rawdev: explicitly disabled via build config 00:01:51.580 regexdev: explicitly 
disabled via build config 00:01:51.580 mldev: explicitly disabled via build config 00:01:51.580 rib: explicitly disabled via build config 00:01:51.580 sched: explicitly disabled via build config 00:01:51.580 stack: explicitly disabled via build config 00:01:51.580 ipsec: explicitly disabled via build config 00:01:51.580 pdcp: explicitly disabled via build config 00:01:51.580 fib: explicitly disabled via build config 00:01:51.580 port: explicitly disabled via build config 00:01:51.580 pdump: explicitly disabled via build config 00:01:51.580 table: explicitly disabled via build config 00:01:51.580 pipeline: explicitly disabled via build config 00:01:51.580 graph: explicitly disabled via build config 00:01:51.580 node: explicitly disabled via build config 00:01:51.580 00:01:51.580 drivers: 00:01:51.580 common/cpt: not in enabled drivers build config 00:01:51.580 common/dpaax: not in enabled drivers build config 00:01:51.580 common/iavf: not in enabled drivers build config 00:01:51.580 common/idpf: not in enabled drivers build config 00:01:51.580 common/ionic: not in enabled drivers build config 00:01:51.580 common/mvep: not in enabled drivers build config 00:01:51.580 common/octeontx: not in enabled drivers build config 00:01:51.580 bus/auxiliary: not in enabled drivers build config 00:01:51.580 bus/cdx: not in enabled drivers build config 00:01:51.580 bus/dpaa: not in enabled drivers build config 00:01:51.580 bus/fslmc: not in enabled drivers build config 00:01:51.580 bus/ifpga: not in enabled drivers build config 00:01:51.580 bus/platform: not in enabled drivers build config 00:01:51.580 bus/uacce: not in enabled drivers build config 00:01:51.580 bus/vmbus: not in enabled drivers build config 00:01:51.580 common/cnxk: not in enabled drivers build config 00:01:51.580 common/mlx5: not in enabled drivers build config 00:01:51.580 common/nfp: not in enabled drivers build config 00:01:51.580 common/nitrox: not in enabled drivers build config 00:01:51.580 common/qat: not in enabled drivers build config 00:01:51.580 common/sfc_efx: not in enabled drivers build config 00:01:51.580 mempool/bucket: not in enabled drivers build config 00:01:51.580 mempool/cnxk: not in enabled drivers build config 00:01:51.580 mempool/dpaa: not in enabled drivers build config 00:01:51.580 mempool/dpaa2: not in enabled drivers build config 00:01:51.580 mempool/octeontx: not in enabled drivers build config 00:01:51.580 mempool/stack: not in enabled drivers build config 00:01:51.580 dma/cnxk: not in enabled drivers build config 00:01:51.580 dma/dpaa: not in enabled drivers build config 00:01:51.580 dma/dpaa2: not in enabled drivers build config 00:01:51.580 dma/hisilicon: not in enabled drivers build config 00:01:51.580 dma/idxd: not in enabled drivers build config 00:01:51.580 dma/ioat: not in enabled drivers build config 00:01:51.580 dma/skeleton: not in enabled drivers build config 00:01:51.580 net/af_packet: not in enabled drivers build config 00:01:51.580 net/af_xdp: not in enabled drivers build config 00:01:51.580 net/ark: not in enabled drivers build config 00:01:51.580 net/atlantic: not in enabled drivers build config 00:01:51.580 net/avp: not in enabled drivers build config 00:01:51.580 net/axgbe: not in enabled drivers build config 00:01:51.580 net/bnx2x: not in enabled drivers build config 00:01:51.580 net/bnxt: not in enabled drivers build config 00:01:51.580 net/bonding: not in enabled drivers build config 00:01:51.580 net/cnxk: not in enabled drivers build config 00:01:51.580 net/cpfl: not in enabled drivers 
build config 00:01:51.580 net/cxgbe: not in enabled drivers build config 00:01:51.580 net/dpaa: not in enabled drivers build config 00:01:51.580 net/dpaa2: not in enabled drivers build config 00:01:51.580 net/e1000: not in enabled drivers build config 00:01:51.580 net/ena: not in enabled drivers build config 00:01:51.580 net/enetc: not in enabled drivers build config 00:01:51.580 net/enetfec: not in enabled drivers build config 00:01:51.580 net/enic: not in enabled drivers build config 00:01:51.580 net/failsafe: not in enabled drivers build config 00:01:51.580 net/fm10k: not in enabled drivers build config 00:01:51.580 net/gve: not in enabled drivers build config 00:01:51.580 net/hinic: not in enabled drivers build config 00:01:51.580 net/hns3: not in enabled drivers build config 00:01:51.580 net/i40e: not in enabled drivers build config 00:01:51.580 net/iavf: not in enabled drivers build config 00:01:51.580 net/ice: not in enabled drivers build config 00:01:51.580 net/idpf: not in enabled drivers build config 00:01:51.580 net/igc: not in enabled drivers build config 00:01:51.580 net/ionic: not in enabled drivers build config 00:01:51.580 net/ipn3ke: not in enabled drivers build config 00:01:51.580 net/ixgbe: not in enabled drivers build config 00:01:51.580 net/mana: not in enabled drivers build config 00:01:51.580 net/memif: not in enabled drivers build config 00:01:51.580 net/mlx4: not in enabled drivers build config 00:01:51.580 net/mlx5: not in enabled drivers build config 00:01:51.580 net/mvneta: not in enabled drivers build config 00:01:51.580 net/mvpp2: not in enabled drivers build config 00:01:51.580 net/netvsc: not in enabled drivers build config 00:01:51.580 net/nfb: not in enabled drivers build config 00:01:51.581 net/nfp: not in enabled drivers build config 00:01:51.581 net/ngbe: not in enabled drivers build config 00:01:51.581 net/null: not in enabled drivers build config 00:01:51.581 net/octeontx: not in enabled drivers build config 00:01:51.581 net/octeon_ep: not in enabled drivers build config 00:01:51.581 net/pcap: not in enabled drivers build config 00:01:51.581 net/pfe: not in enabled drivers build config 00:01:51.581 net/qede: not in enabled drivers build config 00:01:51.581 net/ring: not in enabled drivers build config 00:01:51.581 net/sfc: not in enabled drivers build config 00:01:51.581 net/softnic: not in enabled drivers build config 00:01:51.581 net/tap: not in enabled drivers build config 00:01:51.581 net/thunderx: not in enabled drivers build config 00:01:51.581 net/txgbe: not in enabled drivers build config 00:01:51.581 net/vdev_netvsc: not in enabled drivers build config 00:01:51.581 net/vhost: not in enabled drivers build config 00:01:51.581 net/virtio: not in enabled drivers build config 00:01:51.581 net/vmxnet3: not in enabled drivers build config 00:01:51.581 raw/*: missing internal dependency, "rawdev" 00:01:51.581 crypto/armv8: not in enabled drivers build config 00:01:51.581 crypto/bcmfs: not in enabled drivers build config 00:01:51.581 crypto/caam_jr: not in enabled drivers build config 00:01:51.581 crypto/ccp: not in enabled drivers build config 00:01:51.581 crypto/cnxk: not in enabled drivers build config 00:01:51.581 crypto/dpaa_sec: not in enabled drivers build config 00:01:51.581 crypto/dpaa2_sec: not in enabled drivers build config 00:01:51.581 crypto/ipsec_mb: not in enabled drivers build config 00:01:51.581 crypto/mlx5: not in enabled drivers build config 00:01:51.581 crypto/mvsam: not in enabled drivers build config 00:01:51.581 crypto/nitrox: 
not in enabled drivers build config 00:01:51.581 crypto/null: not in enabled drivers build config 00:01:51.581 crypto/octeontx: not in enabled drivers build config 00:01:51.581 crypto/openssl: not in enabled drivers build config 00:01:51.581 crypto/scheduler: not in enabled drivers build config 00:01:51.581 crypto/uadk: not in enabled drivers build config 00:01:51.581 crypto/virtio: not in enabled drivers build config 00:01:51.581 compress/isal: not in enabled drivers build config 00:01:51.581 compress/mlx5: not in enabled drivers build config 00:01:51.581 compress/nitrox: not in enabled drivers build config 00:01:51.581 compress/octeontx: not in enabled drivers build config 00:01:51.581 compress/zlib: not in enabled drivers build config 00:01:51.581 regex/*: missing internal dependency, "regexdev" 00:01:51.581 ml/*: missing internal dependency, "mldev" 00:01:51.581 vdpa/ifc: not in enabled drivers build config 00:01:51.581 vdpa/mlx5: not in enabled drivers build config 00:01:51.581 vdpa/nfp: not in enabled drivers build config 00:01:51.581 vdpa/sfc: not in enabled drivers build config 00:01:51.581 event/*: missing internal dependency, "eventdev" 00:01:51.581 baseband/*: missing internal dependency, "bbdev" 00:01:51.581 gpu/*: missing internal dependency, "gpudev" 00:01:51.581 00:01:51.581 00:01:51.581 Build targets in project: 84 00:01:51.581 00:01:51.581 DPDK 24.03.0 00:01:51.581 00:01:51.581 User defined options 00:01:51.581 buildtype : debug 00:01:51.581 default_library : shared 00:01:51.581 libdir : lib 00:01:51.581 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:51.581 b_sanitize : address 00:01:51.581 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:51.581 c_link_args : 00:01:51.581 cpu_instruction_set: native 00:01:51.581 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:51.581 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:51.581 enable_docs : false 00:01:51.581 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:51.581 enable_kmods : false 00:01:51.581 max_lcores : 128 00:01:51.581 tests : false 00:01:51.581 00:01:51.581 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.581 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:51.581 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.581 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.581 [3/267] Linking static target lib/librte_kvargs.a 00:01:51.581 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.581 [5/267] Linking static target lib/librte_log.a 00:01:51.581 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.842 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.842 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.842 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.842 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:51.842 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.842 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.842 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.842 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.842 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.842 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.842 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.842 [18/267] Linking static target lib/librte_telemetry.a 00:01:52.102 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.102 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.363 [21/267] Linking target lib/librte_log.so.24.1 00:01:52.363 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.363 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.363 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.363 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.363 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.363 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.363 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.363 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:52.363 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:52.363 [31/267] Linking target lib/librte_kvargs.so.24.1 00:01:52.624 [32/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.624 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.624 [34/267] Linking target lib/librte_telemetry.so.24.1 00:01:52.624 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.624 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.624 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:52.624 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.624 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.624 [40/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:52.884 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.884 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.884 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.884 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.884 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.884 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.884 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.142 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:53.143 [49/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.143 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:53.143 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:53.143 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:53.143 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:53.143 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:53.143 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:53.143 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.143 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:53.403 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:53.403 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:53.403 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:53.403 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:53.403 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:53.403 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:53.403 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.663 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:53.663 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:53.663 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.663 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.924 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.924 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:53.924 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.924 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:53.924 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:53.924 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:53.924 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.924 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:53.924 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:53.924 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:54.185 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:54.185 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:54.185 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:54.185 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.185 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.446 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.446 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:54.446 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.446 [87/267] Linking static target lib/librte_eal.a 00:01:54.446 [88/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:54.446 [89/267] Linking static target lib/librte_ring.a 00:01:54.446 [90/267] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.446 [91/267] Linking static target lib/librte_rcu.a 00:01:54.446 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:54.446 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:54.446 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:54.707 [95/267] Linking static target lib/librte_mempool.a 00:01:54.707 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:54.707 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:54.707 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:54.707 [99/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:54.707 [100/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.968 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:54.968 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:54.968 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.968 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:54.968 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:54.968 [106/267] Linking static target lib/librte_meter.a 00:01:54.968 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:55.229 [108/267] Linking static target lib/librte_net.a 00:01:55.229 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.229 [110/267] Linking static target lib/librte_mbuf.a 00:01:55.229 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.229 [112/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.229 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.490 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.490 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.490 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.490 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.490 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.751 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.751 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.751 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:56.012 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.012 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:56.012 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:56.012 [125/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.012 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.012 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.012 [128/267] Linking static target lib/librte_pci.a 00:01:56.274 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:56.274 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.274 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.274 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.274 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.274 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.274 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.274 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.274 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.274 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:56.274 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.274 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.274 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:56.274 [142/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.535 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.535 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:56.535 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.535 [146/267] Linking static target lib/librte_cmdline.a 00:01:56.535 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:56.535 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.535 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.796 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:56.796 [151/267] Linking static target lib/librte_timer.a 00:01:56.796 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:56.796 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.058 [154/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:57.058 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:57.058 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:57.058 [157/267] Linking static target lib/librte_compressdev.a 00:01:57.058 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.058 [159/267] Linking static target lib/librte_hash.a 00:01:57.318 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:57.319 [161/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.319 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.319 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.319 [164/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.319 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.580 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.580 [167/267] Linking static target lib/librte_ethdev.a 00:01:57.580 [168/267] Linking static target lib/librte_dmadev.a 00:01:57.580 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.580 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.580 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.841 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.841 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.841 [174/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.841 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.102 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.102 [177/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.102 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.102 [179/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.102 [180/267] Linking static target lib/librte_cryptodev.a 00:01:58.102 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.102 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.102 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.102 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.102 [185/267] Linking static target lib/librte_power.a 00:01:58.363 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.364 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.364 [188/267] Linking static target lib/librte_reorder.a 00:01:58.364 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:58.665 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.665 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:58.665 [192/267] Linking static target lib/librte_security.a 00:01:58.665 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.665 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:58.926 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.187 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.187 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.187 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.187 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.187 [200/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.446 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:59.446 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:59.446 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:59.446 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:59.446 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.704 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.704 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.704 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:59.704 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:59.704 [210/267] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.704 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.705 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.705 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.705 [214/267] Linking static target drivers/librte_bus_vdev.a 00:01:59.963 [215/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.963 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.963 [217/267] Linking static target drivers/librte_bus_pci.a 00:01:59.963 [218/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.963 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.963 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.963 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.963 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.963 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.963 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:00.221 [225/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.221 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.815 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.753 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.753 [229/267] Linking target lib/librte_eal.so.24.1 00:02:01.753 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:01.753 [231/267] Linking target lib/librte_meter.so.24.1 00:02:01.753 [232/267] Linking target lib/librte_ring.so.24.1 00:02:01.753 [233/267] Linking target lib/librte_pci.so.24.1 00:02:01.753 [234/267] Linking target lib/librte_timer.so.24.1 00:02:01.753 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:01.753 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:01.754 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:01.754 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:01.754 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:02.014 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:02.014 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:02.014 [242/267] Linking target lib/librte_rcu.so.24.1 00:02:02.015 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:02.015 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:02.015 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:02.015 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:02.015 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:02.015 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:02.273 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 
00:02:02.273 [250/267] Linking target lib/librte_compressdev.so.24.1 00:02:02.273 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:02.273 [252/267] Linking target lib/librte_net.so.24.1 00:02:02.273 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:02.273 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:02.273 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:02.273 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:02.273 [257/267] Linking target lib/librte_security.so.24.1 00:02:02.273 [258/267] Linking target lib/librte_hash.so.24.1 00:02:02.533 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:02.533 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.793 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:02.793 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:02.793 [263/267] Linking target lib/librte_power.so.24.1 00:02:03.734 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:03.734 [265/267] Linking static target lib/librte_vhost.a 00:02:04.668 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.669 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:04.669 INFO: autodetecting backend as ninja 00:02:04.669 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:22.764 CC lib/ut_mock/mock.o 00:02:22.764 CC lib/ut/ut.o 00:02:22.764 CC lib/log/log.o 00:02:22.764 CC lib/log/log_deprecated.o 00:02:22.764 CC lib/log/log_flags.o 00:02:22.764 LIB libspdk_ut_mock.a 00:02:22.764 SO libspdk_ut_mock.so.6.0 00:02:22.764 LIB libspdk_log.a 00:02:22.764 LIB libspdk_ut.a 00:02:22.764 SO libspdk_log.so.7.1 00:02:22.764 SO libspdk_ut.so.2.0 00:02:22.764 SYMLINK libspdk_ut_mock.so 00:02:22.764 SYMLINK libspdk_ut.so 00:02:22.764 SYMLINK libspdk_log.so 00:02:22.764 CXX lib/trace_parser/trace.o 00:02:22.764 CC lib/ioat/ioat.o 00:02:22.764 CC lib/dma/dma.o 00:02:22.764 CC lib/util/bit_array.o 00:02:22.764 CC lib/util/base64.o 00:02:22.764 CC lib/util/crc16.o 00:02:22.764 CC lib/util/cpuset.o 00:02:22.764 CC lib/util/crc32.o 00:02:22.764 CC lib/util/crc32c.o 00:02:22.764 CC lib/util/crc32_ieee.o 00:02:22.764 CC lib/vfio_user/host/vfio_user_pci.o 00:02:22.764 CC lib/util/crc64.o 00:02:22.764 CC lib/util/dif.o 00:02:22.764 LIB libspdk_dma.a 00:02:22.764 CC lib/util/fd.o 00:02:22.764 SO libspdk_dma.so.5.0 00:02:22.764 CC lib/util/fd_group.o 00:02:22.764 CC lib/vfio_user/host/vfio_user.o 00:02:22.764 CC lib/util/file.o 00:02:22.764 SYMLINK libspdk_dma.so 00:02:22.764 CC lib/util/hexlify.o 00:02:22.764 CC lib/util/iov.o 00:02:22.764 LIB libspdk_ioat.a 00:02:22.764 SO libspdk_ioat.so.7.0 00:02:22.764 CC lib/util/math.o 00:02:22.764 CC lib/util/net.o 00:02:22.764 CC lib/util/pipe.o 00:02:22.764 SYMLINK libspdk_ioat.so 00:02:22.764 CC lib/util/strerror_tls.o 00:02:22.764 CC lib/util/string.o 00:02:22.764 CC lib/util/uuid.o 00:02:22.764 CC lib/util/xor.o 00:02:22.764 LIB libspdk_vfio_user.a 00:02:22.764 SO libspdk_vfio_user.so.5.0 00:02:22.764 CC lib/util/zipf.o 00:02:22.764 CC lib/util/md5.o 00:02:22.764 SYMLINK libspdk_vfio_user.so 00:02:22.764 LIB libspdk_util.a 00:02:22.764 LIB libspdk_trace_parser.a 00:02:22.764 SO libspdk_trace_parser.so.6.0 00:02:22.764 SO libspdk_util.so.10.1 00:02:22.764 
SYMLINK libspdk_trace_parser.so 00:02:22.764 SYMLINK libspdk_util.so 00:02:22.764 CC lib/json/json_parse.o 00:02:22.764 CC lib/json/json_util.o 00:02:22.764 CC lib/json/json_write.o 00:02:22.764 CC lib/env_dpdk/env.o 00:02:22.764 CC lib/env_dpdk/memory.o 00:02:22.764 CC lib/vmd/vmd.o 00:02:22.764 CC lib/vmd/led.o 00:02:22.764 CC lib/rdma_utils/rdma_utils.o 00:02:22.764 CC lib/conf/conf.o 00:02:22.764 CC lib/idxd/idxd.o 00:02:22.764 CC lib/idxd/idxd_user.o 00:02:22.764 CC lib/idxd/idxd_kernel.o 00:02:22.764 CC lib/env_dpdk/pci.o 00:02:22.764 LIB libspdk_rdma_utils.a 00:02:22.764 LIB libspdk_conf.a 00:02:22.764 LIB libspdk_json.a 00:02:22.764 SO libspdk_rdma_utils.so.1.0 00:02:22.764 SO libspdk_conf.so.6.0 00:02:22.765 SO libspdk_json.so.6.0 00:02:22.765 CC lib/env_dpdk/init.o 00:02:22.765 SYMLINK libspdk_rdma_utils.so 00:02:22.765 CC lib/env_dpdk/threads.o 00:02:22.765 SYMLINK libspdk_conf.so 00:02:22.765 CC lib/env_dpdk/pci_ioat.o 00:02:22.765 CC lib/env_dpdk/pci_virtio.o 00:02:22.765 SYMLINK libspdk_json.so 00:02:22.765 CC lib/env_dpdk/pci_vmd.o 00:02:23.023 CC lib/env_dpdk/pci_idxd.o 00:02:23.023 CC lib/env_dpdk/pci_event.o 00:02:23.023 CC lib/env_dpdk/sigbus_handler.o 00:02:23.023 CC lib/env_dpdk/pci_dpdk.o 00:02:23.023 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:23.023 LIB libspdk_vmd.a 00:02:23.023 SO libspdk_vmd.so.6.0 00:02:23.023 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.023 SYMLINK libspdk_vmd.so 00:02:23.023 LIB libspdk_idxd.a 00:02:23.282 SO libspdk_idxd.so.12.1 00:02:23.282 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:23.282 CC lib/jsonrpc/jsonrpc_server.o 00:02:23.282 CC lib/jsonrpc/jsonrpc_client.o 00:02:23.282 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:23.282 CC lib/rdma_provider/common.o 00:02:23.282 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:23.282 SYMLINK libspdk_idxd.so 00:02:23.540 LIB libspdk_rdma_provider.a 00:02:23.540 LIB libspdk_jsonrpc.a 00:02:23.540 SO libspdk_rdma_provider.so.7.0 00:02:23.540 SO libspdk_jsonrpc.so.6.0 00:02:23.540 SYMLINK libspdk_rdma_provider.so 00:02:23.540 SYMLINK libspdk_jsonrpc.so 00:02:23.799 CC lib/rpc/rpc.o 00:02:23.799 LIB libspdk_env_dpdk.a 00:02:24.057 SO libspdk_env_dpdk.so.15.1 00:02:24.057 LIB libspdk_rpc.a 00:02:24.057 SO libspdk_rpc.so.6.0 00:02:24.057 SYMLINK libspdk_env_dpdk.so 00:02:24.057 SYMLINK libspdk_rpc.so 00:02:24.057 CC lib/notify/notify.o 00:02:24.316 CC lib/notify/notify_rpc.o 00:02:24.316 CC lib/keyring/keyring.o 00:02:24.316 CC lib/keyring/keyring_rpc.o 00:02:24.316 CC lib/trace/trace_flags.o 00:02:24.316 CC lib/trace/trace.o 00:02:24.316 CC lib/trace/trace_rpc.o 00:02:24.316 LIB libspdk_notify.a 00:02:24.316 SO libspdk_notify.so.6.0 00:02:24.316 LIB libspdk_keyring.a 00:02:24.316 SYMLINK libspdk_notify.so 00:02:24.316 SO libspdk_keyring.so.2.0 00:02:24.316 LIB libspdk_trace.a 00:02:24.575 SYMLINK libspdk_keyring.so 00:02:24.575 SO libspdk_trace.so.11.0 00:02:24.575 SYMLINK libspdk_trace.so 00:02:24.833 CC lib/thread/thread.o 00:02:24.833 CC lib/thread/iobuf.o 00:02:24.833 CC lib/sock/sock.o 00:02:24.833 CC lib/sock/sock_rpc.o 00:02:25.092 LIB libspdk_sock.a 00:02:25.092 SO libspdk_sock.so.10.0 00:02:25.352 SYMLINK libspdk_sock.so 00:02:25.352 CC lib/nvme/nvme_ctrlr.o 00:02:25.352 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:25.352 CC lib/nvme/nvme_ns.o 00:02:25.352 CC lib/nvme/nvme_ns_cmd.o 00:02:25.352 CC lib/nvme/nvme_fabric.o 00:02:25.352 CC lib/nvme/nvme_qpair.o 00:02:25.352 CC lib/nvme/nvme.o 00:02:25.352 CC lib/nvme/nvme_pcie_common.o 00:02:25.352 CC lib/nvme/nvme_pcie.o 00:02:26.288 CC lib/nvme/nvme_quirks.o 00:02:26.288 
CC lib/nvme/nvme_transport.o 00:02:26.288 CC lib/nvme/nvme_discovery.o 00:02:26.288 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:26.288 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:26.288 LIB libspdk_thread.a 00:02:26.288 CC lib/nvme/nvme_tcp.o 00:02:26.288 SO libspdk_thread.so.11.0 00:02:26.288 CC lib/nvme/nvme_opal.o 00:02:26.288 SYMLINK libspdk_thread.so 00:02:26.288 CC lib/nvme/nvme_io_msg.o 00:02:26.547 CC lib/nvme/nvme_poll_group.o 00:02:26.547 CC lib/nvme/nvme_zns.o 00:02:26.547 CC lib/nvme/nvme_stubs.o 00:02:26.547 CC lib/nvme/nvme_auth.o 00:02:26.547 CC lib/nvme/nvme_cuse.o 00:02:26.806 CC lib/nvme/nvme_rdma.o 00:02:27.065 CC lib/accel/accel.o 00:02:27.065 CC lib/blob/blobstore.o 00:02:27.065 CC lib/init/json_config.o 00:02:27.065 CC lib/init/subsystem.o 00:02:27.065 CC lib/init/subsystem_rpc.o 00:02:27.065 CC lib/init/rpc.o 00:02:27.065 CC lib/accel/accel_rpc.o 00:02:27.323 CC lib/accel/accel_sw.o 00:02:27.323 LIB libspdk_init.a 00:02:27.323 SO libspdk_init.so.6.0 00:02:27.323 SYMLINK libspdk_init.so 00:02:27.323 CC lib/virtio/virtio.o 00:02:27.582 CC lib/virtio/virtio_vhost_user.o 00:02:27.582 CC lib/virtio/virtio_vfio_user.o 00:02:27.582 CC lib/fsdev/fsdev.o 00:02:27.582 CC lib/fsdev/fsdev_io.o 00:02:27.582 CC lib/fsdev/fsdev_rpc.o 00:02:27.582 CC lib/virtio/virtio_pci.o 00:02:27.840 CC lib/blob/request.o 00:02:27.840 CC lib/blob/zeroes.o 00:02:27.840 CC lib/event/app.o 00:02:27.840 CC lib/event/reactor.o 00:02:27.840 LIB libspdk_nvme.a 00:02:27.840 CC lib/blob/blob_bs_dev.o 00:02:27.840 CC lib/event/log_rpc.o 00:02:27.841 LIB libspdk_accel.a 00:02:27.841 CC lib/event/app_rpc.o 00:02:28.098 LIB libspdk_virtio.a 00:02:28.098 SO libspdk_virtio.so.7.0 00:02:28.098 SO libspdk_accel.so.16.0 00:02:28.098 CC lib/event/scheduler_static.o 00:02:28.098 SYMLINK libspdk_accel.so 00:02:28.098 SYMLINK libspdk_virtio.so 00:02:28.098 SO libspdk_nvme.so.15.0 00:02:28.098 LIB libspdk_fsdev.a 00:02:28.098 SO libspdk_fsdev.so.2.0 00:02:28.356 SYMLINK libspdk_fsdev.so 00:02:28.356 LIB libspdk_event.a 00:02:28.356 CC lib/bdev/bdev_zone.o 00:02:28.356 CC lib/bdev/bdev_rpc.o 00:02:28.356 CC lib/bdev/part.o 00:02:28.356 CC lib/bdev/bdev.o 00:02:28.356 SO libspdk_event.so.14.0 00:02:28.356 CC lib/bdev/scsi_nvme.o 00:02:28.356 SYMLINK libspdk_nvme.so 00:02:28.356 SYMLINK libspdk_event.so 00:02:28.356 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:29.291 LIB libspdk_fuse_dispatcher.a 00:02:29.291 SO libspdk_fuse_dispatcher.so.1.0 00:02:29.291 SYMLINK libspdk_fuse_dispatcher.so 00:02:29.857 LIB libspdk_blob.a 00:02:29.857 SO libspdk_blob.so.11.0 00:02:30.116 SYMLINK libspdk_blob.so 00:02:30.116 CC lib/blobfs/blobfs.o 00:02:30.116 CC lib/blobfs/tree.o 00:02:30.116 CC lib/lvol/lvol.o 00:02:31.049 LIB libspdk_blobfs.a 00:02:31.049 SO libspdk_blobfs.so.10.0 00:02:31.049 LIB libspdk_bdev.a 00:02:31.049 SYMLINK libspdk_blobfs.so 00:02:31.049 SO libspdk_bdev.so.17.0 00:02:31.307 LIB libspdk_lvol.a 00:02:31.307 SO libspdk_lvol.so.10.0 00:02:31.307 SYMLINK libspdk_bdev.so 00:02:31.307 SYMLINK libspdk_lvol.so 00:02:31.307 CC lib/scsi/dev.o 00:02:31.307 CC lib/scsi/port.o 00:02:31.307 CC lib/scsi/scsi.o 00:02:31.307 CC lib/scsi/scsi_bdev.o 00:02:31.307 CC lib/scsi/scsi_pr.o 00:02:31.307 CC lib/scsi/lun.o 00:02:31.307 CC lib/ublk/ublk.o 00:02:31.307 CC lib/nvmf/ctrlr.o 00:02:31.307 CC lib/nbd/nbd.o 00:02:31.307 CC lib/ftl/ftl_core.o 00:02:31.565 CC lib/ftl/ftl_init.o 00:02:31.565 CC lib/ftl/ftl_layout.o 00:02:31.565 CC lib/ftl/ftl_debug.o 00:02:31.565 CC lib/ftl/ftl_io.o 00:02:31.836 CC lib/ftl/ftl_sb.o 00:02:31.836 CC 
lib/ftl/ftl_l2p.o 00:02:31.836 CC lib/ublk/ublk_rpc.o 00:02:31.836 CC lib/nbd/nbd_rpc.o 00:02:31.836 CC lib/scsi/scsi_rpc.o 00:02:31.836 CC lib/ftl/ftl_l2p_flat.o 00:02:31.836 CC lib/ftl/ftl_nv_cache.o 00:02:31.836 CC lib/scsi/task.o 00:02:31.836 CC lib/nvmf/ctrlr_discovery.o 00:02:31.836 CC lib/ftl/ftl_band.o 00:02:31.836 CC lib/ftl/ftl_band_ops.o 00:02:31.836 CC lib/ftl/ftl_writer.o 00:02:31.836 LIB libspdk_nbd.a 00:02:32.165 SO libspdk_nbd.so.7.0 00:02:32.165 CC lib/nvmf/ctrlr_bdev.o 00:02:32.165 LIB libspdk_ublk.a 00:02:32.165 SYMLINK libspdk_nbd.so 00:02:32.165 CC lib/nvmf/subsystem.o 00:02:32.165 SO libspdk_ublk.so.3.0 00:02:32.165 LIB libspdk_scsi.a 00:02:32.165 SO libspdk_scsi.so.9.0 00:02:32.165 SYMLINK libspdk_ublk.so 00:02:32.165 CC lib/nvmf/nvmf.o 00:02:32.165 CC lib/ftl/ftl_rq.o 00:02:32.165 SYMLINK libspdk_scsi.so 00:02:32.165 CC lib/nvmf/nvmf_rpc.o 00:02:32.165 CC lib/nvmf/transport.o 00:02:32.165 CC lib/nvmf/tcp.o 00:02:32.446 CC lib/iscsi/conn.o 00:02:32.446 CC lib/vhost/vhost.o 00:02:32.704 CC lib/vhost/vhost_rpc.o 00:02:32.963 CC lib/ftl/ftl_reloc.o 00:02:32.963 CC lib/vhost/vhost_scsi.o 00:02:32.963 CC lib/nvmf/stubs.o 00:02:32.963 CC lib/iscsi/init_grp.o 00:02:32.963 CC lib/iscsi/iscsi.o 00:02:33.222 CC lib/ftl/ftl_l2p_cache.o 00:02:33.222 CC lib/ftl/ftl_p2l.o 00:02:33.222 CC lib/ftl/ftl_p2l_log.o 00:02:33.222 CC lib/ftl/mngt/ftl_mngt.o 00:02:33.222 CC lib/iscsi/param.o 00:02:33.222 CC lib/iscsi/portal_grp.o 00:02:33.481 CC lib/nvmf/mdns_server.o 00:02:33.481 CC lib/iscsi/tgt_node.o 00:02:33.481 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:33.481 CC lib/vhost/vhost_blk.o 00:02:33.481 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:33.481 CC lib/iscsi/iscsi_subsystem.o 00:02:33.481 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:33.739 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:33.739 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:33.739 CC lib/vhost/rte_vhost_user.o 00:02:33.739 CC lib/nvmf/rdma.o 00:02:33.739 CC lib/nvmf/auth.o 00:02:33.739 CC lib/iscsi/iscsi_rpc.o 00:02:33.739 CC lib/iscsi/task.o 00:02:33.998 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:33.998 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:33.998 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:33.998 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:33.998 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:33.998 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:34.256 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:34.256 CC lib/ftl/utils/ftl_conf.o 00:02:34.256 CC lib/ftl/utils/ftl_md.o 00:02:34.256 CC lib/ftl/utils/ftl_mempool.o 00:02:34.256 CC lib/ftl/utils/ftl_bitmap.o 00:02:34.515 CC lib/ftl/utils/ftl_property.o 00:02:34.515 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:34.515 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:34.515 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:34.515 LIB libspdk_iscsi.a 00:02:34.515 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:34.515 SO libspdk_iscsi.so.8.0 00:02:34.515 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:34.515 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:34.515 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:34.515 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:34.515 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:34.515 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:34.515 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:34.515 SYMLINK libspdk_iscsi.so 00:02:34.515 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:34.773 LIB libspdk_vhost.a 00:02:34.773 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:34.773 SO libspdk_vhost.so.8.0 00:02:34.773 CC lib/ftl/base/ftl_base_dev.o 00:02:34.773 CC lib/ftl/base/ftl_base_bdev.o 00:02:34.773 CC lib/ftl/ftl_trace.o 00:02:34.773 SYMLINK libspdk_vhost.so 00:02:35.033 LIB 
libspdk_ftl.a 00:02:35.291 SO libspdk_ftl.so.9.0 00:02:35.291 SYMLINK libspdk_ftl.so 00:02:35.550 LIB libspdk_nvmf.a 00:02:35.809 SO libspdk_nvmf.so.20.0 00:02:35.809 SYMLINK libspdk_nvmf.so 00:02:36.067 CC module/env_dpdk/env_dpdk_rpc.o 00:02:36.326 CC module/keyring/linux/keyring.o 00:02:36.326 CC module/keyring/file/keyring.o 00:02:36.326 CC module/scheduler/gscheduler/gscheduler.o 00:02:36.326 CC module/accel/error/accel_error.o 00:02:36.326 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:36.326 CC module/sock/posix/posix.o 00:02:36.326 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:36.326 CC module/blob/bdev/blob_bdev.o 00:02:36.326 CC module/fsdev/aio/fsdev_aio.o 00:02:36.326 LIB libspdk_env_dpdk_rpc.a 00:02:36.326 SO libspdk_env_dpdk_rpc.so.6.0 00:02:36.326 SYMLINK libspdk_env_dpdk_rpc.so 00:02:36.326 CC module/keyring/linux/keyring_rpc.o 00:02:36.326 CC module/keyring/file/keyring_rpc.o 00:02:36.326 LIB libspdk_scheduler_gscheduler.a 00:02:36.326 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:36.326 SO libspdk_scheduler_gscheduler.so.4.0 00:02:36.326 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.326 CC module/accel/error/accel_error_rpc.o 00:02:36.326 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:36.326 SYMLINK libspdk_scheduler_gscheduler.so 00:02:36.326 LIB libspdk_keyring_linux.a 00:02:36.326 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:36.326 SO libspdk_keyring_linux.so.1.0 00:02:36.326 LIB libspdk_scheduler_dynamic.a 00:02:36.326 CC module/fsdev/aio/linux_aio_mgr.o 00:02:36.326 SO libspdk_scheduler_dynamic.so.4.0 00:02:36.326 LIB libspdk_keyring_file.a 00:02:36.326 SYMLINK libspdk_keyring_linux.so 00:02:36.585 LIB libspdk_blob_bdev.a 00:02:36.585 SO libspdk_keyring_file.so.2.0 00:02:36.585 LIB libspdk_accel_error.a 00:02:36.585 SO libspdk_blob_bdev.so.11.0 00:02:36.585 SYMLINK libspdk_scheduler_dynamic.so 00:02:36.585 SO libspdk_accel_error.so.2.0 00:02:36.585 SYMLINK libspdk_keyring_file.so 00:02:36.585 CC module/accel/ioat/accel_ioat.o 00:02:36.585 SYMLINK libspdk_blob_bdev.so 00:02:36.585 CC module/accel/ioat/accel_ioat_rpc.o 00:02:36.585 SYMLINK libspdk_accel_error.so 00:02:36.585 CC module/accel/iaa/accel_iaa.o 00:02:36.585 CC module/accel/iaa/accel_iaa_rpc.o 00:02:36.585 CC module/accel/dsa/accel_dsa.o 00:02:36.585 LIB libspdk_accel_ioat.a 00:02:36.585 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.843 SO libspdk_accel_ioat.so.6.0 00:02:36.843 CC module/bdev/delay/vbdev_delay.o 00:02:36.843 CC module/bdev/error/vbdev_error.o 00:02:36.843 CC module/bdev/gpt/gpt.o 00:02:36.843 LIB libspdk_accel_iaa.a 00:02:36.843 SO libspdk_accel_iaa.so.3.0 00:02:36.843 SYMLINK libspdk_accel_ioat.so 00:02:36.843 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:36.843 LIB libspdk_sock_posix.a 00:02:36.843 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.843 CC module/accel/dsa/accel_dsa_rpc.o 00:02:36.844 SYMLINK libspdk_accel_iaa.so 00:02:36.844 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:36.844 SO libspdk_sock_posix.so.6.0 00:02:36.844 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.844 SYMLINK libspdk_sock_posix.so 00:02:36.844 LIB libspdk_fsdev_aio.a 00:02:36.844 CC module/bdev/gpt/vbdev_gpt.o 00:02:36.844 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.844 SO libspdk_fsdev_aio.so.1.0 00:02:36.844 LIB libspdk_accel_dsa.a 00:02:37.102 LIB libspdk_blobfs_bdev.a 00:02:37.102 SO libspdk_accel_dsa.so.5.0 00:02:37.102 SYMLINK libspdk_fsdev_aio.so 00:02:37.102 SO libspdk_blobfs_bdev.so.6.0 00:02:37.102 LIB libspdk_bdev_delay.a 00:02:37.102 CC module/bdev/malloc/bdev_malloc.o 00:02:37.102 
SO libspdk_bdev_delay.so.6.0 00:02:37.102 SYMLINK libspdk_accel_dsa.so 00:02:37.102 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:37.102 SYMLINK libspdk_blobfs_bdev.so 00:02:37.102 LIB libspdk_bdev_error.a 00:02:37.102 CC module/bdev/null/bdev_null.o 00:02:37.102 SYMLINK libspdk_bdev_delay.so 00:02:37.102 SO libspdk_bdev_error.so.6.0 00:02:37.102 CC module/bdev/null/bdev_null_rpc.o 00:02:37.102 LIB libspdk_bdev_gpt.a 00:02:37.102 CC module/bdev/nvme/bdev_nvme.o 00:02:37.102 SO libspdk_bdev_gpt.so.6.0 00:02:37.102 SYMLINK libspdk_bdev_error.so 00:02:37.102 CC module/bdev/passthru/vbdev_passthru.o 00:02:37.102 SYMLINK libspdk_bdev_gpt.so 00:02:37.102 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:37.102 LIB libspdk_bdev_lvol.a 00:02:37.102 CC module/bdev/nvme/nvme_rpc.o 00:02:37.102 SO libspdk_bdev_lvol.so.6.0 00:02:37.360 CC module/bdev/raid/bdev_raid.o 00:02:37.360 LIB libspdk_bdev_null.a 00:02:37.360 CC module/bdev/split/vbdev_split.o 00:02:37.360 SYMLINK libspdk_bdev_lvol.so 00:02:37.360 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:37.360 SO libspdk_bdev_null.so.6.0 00:02:37.360 LIB libspdk_bdev_malloc.a 00:02:37.360 SO libspdk_bdev_malloc.so.6.0 00:02:37.360 SYMLINK libspdk_bdev_null.so 00:02:37.360 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:37.360 SYMLINK libspdk_bdev_malloc.so 00:02:37.360 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:37.360 CC module/bdev/xnvme/bdev_xnvme.o 00:02:37.360 CC module/bdev/split/vbdev_split_rpc.o 00:02:37.360 CC module/bdev/raid/bdev_raid_rpc.o 00:02:37.618 LIB libspdk_bdev_passthru.a 00:02:37.618 CC module/bdev/aio/bdev_aio.o 00:02:37.618 CC module/bdev/ftl/bdev_ftl.o 00:02:37.618 SO libspdk_bdev_passthru.so.6.0 00:02:37.618 LIB libspdk_bdev_zone_block.a 00:02:37.618 LIB libspdk_bdev_split.a 00:02:37.618 SO libspdk_bdev_zone_block.so.6.0 00:02:37.618 SYMLINK libspdk_bdev_passthru.so 00:02:37.618 SO libspdk_bdev_split.so.6.0 00:02:37.618 SYMLINK libspdk_bdev_zone_block.so 00:02:37.618 CC module/bdev/raid/bdev_raid_sb.o 00:02:37.618 SYMLINK libspdk_bdev_split.so 00:02:37.618 CC module/bdev/nvme/bdev_mdns_client.o 00:02:37.618 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:37.618 CC module/bdev/iscsi/bdev_iscsi.o 00:02:37.618 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:37.618 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:37.618 CC module/bdev/raid/raid0.o 00:02:37.877 CC module/bdev/aio/bdev_aio_rpc.o 00:02:37.877 LIB libspdk_bdev_xnvme.a 00:02:37.877 SO libspdk_bdev_xnvme.so.3.0 00:02:37.877 CC module/bdev/raid/raid1.o 00:02:37.877 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:37.877 SYMLINK libspdk_bdev_xnvme.so 00:02:37.877 CC module/bdev/raid/concat.o 00:02:37.877 LIB libspdk_bdev_ftl.a 00:02:37.877 LIB libspdk_bdev_aio.a 00:02:37.877 SO libspdk_bdev_ftl.so.6.0 00:02:37.877 SO libspdk_bdev_aio.so.6.0 00:02:37.877 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:37.877 SYMLINK libspdk_bdev_ftl.so 00:02:37.877 CC module/bdev/nvme/vbdev_opal.o 00:02:37.877 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:37.877 SYMLINK libspdk_bdev_aio.so 00:02:37.877 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:38.135 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:38.135 LIB libspdk_bdev_raid.a 00:02:38.135 LIB libspdk_bdev_iscsi.a 00:02:38.135 SO libspdk_bdev_iscsi.so.6.0 00:02:38.135 SO libspdk_bdev_raid.so.6.0 00:02:38.135 SYMLINK libspdk_bdev_iscsi.so 00:02:38.135 LIB libspdk_bdev_virtio.a 00:02:38.135 SYMLINK libspdk_bdev_raid.so 00:02:38.135 SO libspdk_bdev_virtio.so.6.0 00:02:38.135 SYMLINK libspdk_bdev_virtio.so 00:02:39.070 LIB libspdk_bdev_nvme.a 00:02:39.070 
SO libspdk_bdev_nvme.so.7.1 00:02:39.329 SYMLINK libspdk_bdev_nvme.so 00:02:39.588 CC module/event/subsystems/vmd/vmd.o 00:02:39.588 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:39.588 CC module/event/subsystems/sock/sock.o 00:02:39.588 CC module/event/subsystems/iobuf/iobuf.o 00:02:39.588 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:39.588 CC module/event/subsystems/fsdev/fsdev.o 00:02:39.588 CC module/event/subsystems/scheduler/scheduler.o 00:02:39.588 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:39.588 CC module/event/subsystems/keyring/keyring.o 00:02:39.846 LIB libspdk_event_sock.a 00:02:39.846 LIB libspdk_event_iobuf.a 00:02:39.846 LIB libspdk_event_vmd.a 00:02:39.846 LIB libspdk_event_fsdev.a 00:02:39.846 LIB libspdk_event_vhost_blk.a 00:02:39.846 SO libspdk_event_sock.so.5.0 00:02:39.846 LIB libspdk_event_scheduler.a 00:02:39.846 SO libspdk_event_iobuf.so.3.0 00:02:39.846 LIB libspdk_event_keyring.a 00:02:39.846 SO libspdk_event_vhost_blk.so.3.0 00:02:39.846 SO libspdk_event_fsdev.so.1.0 00:02:39.846 SO libspdk_event_scheduler.so.4.0 00:02:39.846 SO libspdk_event_vmd.so.6.0 00:02:39.846 SO libspdk_event_keyring.so.1.0 00:02:39.846 SYMLINK libspdk_event_sock.so 00:02:39.846 SYMLINK libspdk_event_fsdev.so 00:02:39.846 SYMLINK libspdk_event_vhost_blk.so 00:02:39.846 SYMLINK libspdk_event_iobuf.so 00:02:39.846 SYMLINK libspdk_event_scheduler.so 00:02:39.846 SYMLINK libspdk_event_keyring.so 00:02:39.846 SYMLINK libspdk_event_vmd.so 00:02:40.105 CC module/event/subsystems/accel/accel.o 00:02:40.105 LIB libspdk_event_accel.a 00:02:40.105 SO libspdk_event_accel.so.6.0 00:02:40.363 SYMLINK libspdk_event_accel.so 00:02:40.363 CC module/event/subsystems/bdev/bdev.o 00:02:40.622 LIB libspdk_event_bdev.a 00:02:40.622 SO libspdk_event_bdev.so.6.0 00:02:40.622 SYMLINK libspdk_event_bdev.so 00:02:40.887 CC module/event/subsystems/ublk/ublk.o 00:02:40.887 CC module/event/subsystems/nbd/nbd.o 00:02:40.887 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:40.887 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:40.887 CC module/event/subsystems/scsi/scsi.o 00:02:40.887 LIB libspdk_event_ublk.a 00:02:40.887 LIB libspdk_event_nbd.a 00:02:40.887 LIB libspdk_event_scsi.a 00:02:40.887 SO libspdk_event_ublk.so.3.0 00:02:40.887 SO libspdk_event_nbd.so.6.0 00:02:40.887 SO libspdk_event_scsi.so.6.0 00:02:41.147 SYMLINK libspdk_event_ublk.so 00:02:41.147 SYMLINK libspdk_event_nbd.so 00:02:41.147 SYMLINK libspdk_event_scsi.so 00:02:41.147 LIB libspdk_event_nvmf.a 00:02:41.147 SO libspdk_event_nvmf.so.6.0 00:02:41.147 SYMLINK libspdk_event_nvmf.so 00:02:41.147 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:41.147 CC module/event/subsystems/iscsi/iscsi.o 00:02:41.405 LIB libspdk_event_vhost_scsi.a 00:02:41.405 SO libspdk_event_vhost_scsi.so.3.0 00:02:41.405 LIB libspdk_event_iscsi.a 00:02:41.405 SO libspdk_event_iscsi.so.6.0 00:02:41.405 SYMLINK libspdk_event_vhost_scsi.so 00:02:41.405 SYMLINK libspdk_event_iscsi.so 00:02:41.663 SO libspdk.so.6.0 00:02:41.663 SYMLINK libspdk.so 00:02:41.663 CC test/rpc_client/rpc_client_test.o 00:02:41.663 TEST_HEADER include/spdk/accel.h 00:02:41.663 TEST_HEADER include/spdk/accel_module.h 00:02:41.663 CXX app/trace/trace.o 00:02:41.663 TEST_HEADER include/spdk/assert.h 00:02:41.663 TEST_HEADER include/spdk/barrier.h 00:02:41.663 TEST_HEADER include/spdk/base64.h 00:02:41.663 TEST_HEADER include/spdk/bdev.h 00:02:41.663 TEST_HEADER include/spdk/bdev_module.h 00:02:41.663 TEST_HEADER include/spdk/bdev_zone.h 00:02:41.663 TEST_HEADER 
include/spdk/bit_array.h 00:02:41.663 TEST_HEADER include/spdk/bit_pool.h 00:02:41.663 TEST_HEADER include/spdk/blob_bdev.h 00:02:41.663 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:41.663 TEST_HEADER include/spdk/blobfs.h 00:02:41.663 TEST_HEADER include/spdk/blob.h 00:02:41.663 TEST_HEADER include/spdk/conf.h 00:02:41.663 TEST_HEADER include/spdk/config.h 00:02:41.663 TEST_HEADER include/spdk/cpuset.h 00:02:41.663 TEST_HEADER include/spdk/crc16.h 00:02:41.663 TEST_HEADER include/spdk/crc32.h 00:02:41.663 TEST_HEADER include/spdk/crc64.h 00:02:41.663 TEST_HEADER include/spdk/dif.h 00:02:41.663 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:41.663 TEST_HEADER include/spdk/dma.h 00:02:41.663 TEST_HEADER include/spdk/endian.h 00:02:41.663 TEST_HEADER include/spdk/env_dpdk.h 00:02:41.921 TEST_HEADER include/spdk/env.h 00:02:41.921 TEST_HEADER include/spdk/event.h 00:02:41.921 TEST_HEADER include/spdk/fd_group.h 00:02:41.921 TEST_HEADER include/spdk/fd.h 00:02:41.921 CC examples/util/zipf/zipf.o 00:02:41.921 TEST_HEADER include/spdk/file.h 00:02:41.921 TEST_HEADER include/spdk/fsdev.h 00:02:41.921 TEST_HEADER include/spdk/fsdev_module.h 00:02:41.921 CC test/thread/poller_perf/poller_perf.o 00:02:41.921 TEST_HEADER include/spdk/ftl.h 00:02:41.921 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:41.921 TEST_HEADER include/spdk/gpt_spec.h 00:02:41.921 TEST_HEADER include/spdk/hexlify.h 00:02:41.921 TEST_HEADER include/spdk/histogram_data.h 00:02:41.921 TEST_HEADER include/spdk/idxd.h 00:02:41.921 TEST_HEADER include/spdk/idxd_spec.h 00:02:41.921 TEST_HEADER include/spdk/init.h 00:02:41.921 CC examples/ioat/perf/perf.o 00:02:41.921 TEST_HEADER include/spdk/ioat.h 00:02:41.921 TEST_HEADER include/spdk/ioat_spec.h 00:02:41.921 TEST_HEADER include/spdk/iscsi_spec.h 00:02:41.921 TEST_HEADER include/spdk/json.h 00:02:41.921 TEST_HEADER include/spdk/jsonrpc.h 00:02:41.921 TEST_HEADER include/spdk/keyring.h 00:02:41.921 TEST_HEADER include/spdk/keyring_module.h 00:02:41.921 TEST_HEADER include/spdk/likely.h 00:02:41.921 TEST_HEADER include/spdk/log.h 00:02:41.921 TEST_HEADER include/spdk/lvol.h 00:02:41.921 TEST_HEADER include/spdk/md5.h 00:02:41.921 CC test/dma/test_dma/test_dma.o 00:02:41.921 TEST_HEADER include/spdk/memory.h 00:02:41.921 TEST_HEADER include/spdk/mmio.h 00:02:41.921 TEST_HEADER include/spdk/nbd.h 00:02:41.921 TEST_HEADER include/spdk/net.h 00:02:41.921 TEST_HEADER include/spdk/notify.h 00:02:41.921 TEST_HEADER include/spdk/nvme.h 00:02:41.921 TEST_HEADER include/spdk/nvme_intel.h 00:02:41.921 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:41.921 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:41.921 TEST_HEADER include/spdk/nvme_spec.h 00:02:41.921 CC test/app/bdev_svc/bdev_svc.o 00:02:41.921 TEST_HEADER include/spdk/nvme_zns.h 00:02:41.921 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:41.921 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:41.921 TEST_HEADER include/spdk/nvmf.h 00:02:41.921 TEST_HEADER include/spdk/nvmf_spec.h 00:02:41.921 TEST_HEADER include/spdk/nvmf_transport.h 00:02:41.921 TEST_HEADER include/spdk/opal.h 00:02:41.921 TEST_HEADER include/spdk/opal_spec.h 00:02:41.921 TEST_HEADER include/spdk/pci_ids.h 00:02:41.921 TEST_HEADER include/spdk/pipe.h 00:02:41.921 TEST_HEADER include/spdk/queue.h 00:02:41.921 TEST_HEADER include/spdk/reduce.h 00:02:41.921 TEST_HEADER include/spdk/rpc.h 00:02:41.921 TEST_HEADER include/spdk/scheduler.h 00:02:41.921 TEST_HEADER include/spdk/scsi.h 00:02:41.921 TEST_HEADER include/spdk/scsi_spec.h 00:02:41.921 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:41.921 TEST_HEADER include/spdk/sock.h 00:02:41.921 TEST_HEADER include/spdk/stdinc.h 00:02:41.921 TEST_HEADER include/spdk/string.h 00:02:41.921 TEST_HEADER include/spdk/thread.h 00:02:41.921 TEST_HEADER include/spdk/trace.h 00:02:41.921 TEST_HEADER include/spdk/trace_parser.h 00:02:41.921 TEST_HEADER include/spdk/tree.h 00:02:41.921 TEST_HEADER include/spdk/ublk.h 00:02:41.921 TEST_HEADER include/spdk/util.h 00:02:41.921 LINK rpc_client_test 00:02:41.921 TEST_HEADER include/spdk/uuid.h 00:02:41.921 TEST_HEADER include/spdk/version.h 00:02:41.921 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:41.921 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:41.921 TEST_HEADER include/spdk/vhost.h 00:02:41.921 TEST_HEADER include/spdk/vmd.h 00:02:41.921 TEST_HEADER include/spdk/xor.h 00:02:41.921 TEST_HEADER include/spdk/zipf.h 00:02:41.921 CXX test/cpp_headers/accel.o 00:02:41.921 LINK zipf 00:02:41.921 LINK poller_perf 00:02:41.921 LINK interrupt_tgt 00:02:41.921 LINK bdev_svc 00:02:41.921 LINK spdk_trace 00:02:41.921 LINK ioat_perf 00:02:42.179 CXX test/cpp_headers/accel_module.o 00:02:42.179 CC app/trace_record/trace_record.o 00:02:42.179 CC app/nvmf_tgt/nvmf_main.o 00:02:42.179 CC app/iscsi_tgt/iscsi_tgt.o 00:02:42.179 CC examples/ioat/verify/verify.o 00:02:42.179 CXX test/cpp_headers/assert.o 00:02:42.179 CC app/spdk_lspci/spdk_lspci.o 00:02:42.179 CC app/spdk_tgt/spdk_tgt.o 00:02:42.180 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:42.180 LINK test_dma 00:02:42.439 LINK nvmf_tgt 00:02:42.439 LINK spdk_trace_record 00:02:42.439 LINK iscsi_tgt 00:02:42.439 LINK spdk_lspci 00:02:42.439 CXX test/cpp_headers/barrier.o 00:02:42.439 LINK verify 00:02:42.439 LINK mem_callbacks 00:02:42.439 LINK spdk_tgt 00:02:42.439 CXX test/cpp_headers/base64.o 00:02:42.439 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:42.439 CC test/env/vtophys/vtophys.o 00:02:42.439 CC app/spdk_nvme_perf/perf.o 00:02:42.439 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:42.697 CC app/spdk_nvme_identify/identify.o 00:02:42.697 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:42.697 CXX test/cpp_headers/bdev.o 00:02:42.697 CC examples/thread/thread/thread_ex.o 00:02:42.697 LINK vtophys 00:02:42.697 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:42.697 LINK nvme_fuzz 00:02:42.697 CC app/spdk_nvme_discover/discovery_aer.o 00:02:42.697 LINK env_dpdk_post_init 00:02:42.697 CXX test/cpp_headers/bdev_module.o 00:02:42.955 CC test/env/memory/memory_ut.o 00:02:42.955 LINK spdk_nvme_discover 00:02:42.955 LINK thread 00:02:42.955 CC test/env/pci/pci_ut.o 00:02:42.955 CC app/spdk_top/spdk_top.o 00:02:42.955 CXX test/cpp_headers/bdev_zone.o 00:02:43.214 LINK vhost_fuzz 00:02:43.214 CC app/vhost/vhost.o 00:02:43.214 CXX test/cpp_headers/bit_array.o 00:02:43.214 LINK spdk_nvme_perf 00:02:43.214 CC examples/sock/hello_world/hello_sock.o 00:02:43.214 LINK vhost 00:02:43.214 CXX test/cpp_headers/bit_pool.o 00:02:43.214 CC examples/vmd/lsvmd/lsvmd.o 00:02:43.214 LINK pci_ut 00:02:43.472 CC examples/vmd/led/led.o 00:02:43.472 LINK lsvmd 00:02:43.472 LINK spdk_nvme_identify 00:02:43.472 CXX test/cpp_headers/blob_bdev.o 00:02:43.472 CXX test/cpp_headers/blobfs_bdev.o 00:02:43.472 CXX test/cpp_headers/blobfs.o 00:02:43.472 LINK hello_sock 00:02:43.472 LINK led 00:02:43.472 CXX test/cpp_headers/blob.o 00:02:43.472 CXX test/cpp_headers/conf.o 00:02:43.731 CXX test/cpp_headers/config.o 00:02:43.731 CXX test/cpp_headers/cpuset.o 00:02:43.731 CXX test/cpp_headers/crc16.o 00:02:43.731 CC 
app/spdk_dd/spdk_dd.o 00:02:43.731 CXX test/cpp_headers/crc32.o 00:02:43.731 CXX test/cpp_headers/crc64.o 00:02:43.731 CC app/fio/nvme/fio_plugin.o 00:02:43.731 CXX test/cpp_headers/dif.o 00:02:43.731 CC app/fio/bdev/fio_plugin.o 00:02:43.731 CC examples/idxd/perf/perf.o 00:02:43.990 CXX test/cpp_headers/dma.o 00:02:43.990 LINK spdk_top 00:02:43.990 CXX test/cpp_headers/endian.o 00:02:43.990 CXX test/cpp_headers/env_dpdk.o 00:02:43.990 LINK memory_ut 00:02:43.990 CXX test/cpp_headers/env.o 00:02:43.990 CXX test/cpp_headers/event.o 00:02:43.990 LINK spdk_dd 00:02:44.248 CXX test/cpp_headers/fd_group.o 00:02:44.248 LINK idxd_perf 00:02:44.248 CC test/event/event_perf/event_perf.o 00:02:44.248 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:44.249 LINK iscsi_fuzz 00:02:44.249 CC test/app/histogram_perf/histogram_perf.o 00:02:44.249 CC test/nvme/aer/aer.o 00:02:44.249 CXX test/cpp_headers/fd.o 00:02:44.249 LINK event_perf 00:02:44.249 LINK spdk_bdev 00:02:44.249 LINK spdk_nvme 00:02:44.249 CC test/nvme/reset/reset.o 00:02:44.249 CC test/nvme/sgl/sgl.o 00:02:44.249 LINK histogram_perf 00:02:44.249 CXX test/cpp_headers/file.o 00:02:44.507 CXX test/cpp_headers/fsdev.o 00:02:44.507 CC test/nvme/overhead/overhead.o 00:02:44.507 LINK hello_fsdev 00:02:44.507 CC test/nvme/e2edp/nvme_dp.o 00:02:44.507 CC test/event/reactor/reactor.o 00:02:44.507 CXX test/cpp_headers/fsdev_module.o 00:02:44.507 LINK aer 00:02:44.507 CC test/app/jsoncat/jsoncat.o 00:02:44.507 CC test/nvme/err_injection/err_injection.o 00:02:44.507 LINK reset 00:02:44.507 LINK sgl 00:02:44.507 LINK reactor 00:02:44.766 CXX test/cpp_headers/ftl.o 00:02:44.766 LINK jsoncat 00:02:44.766 LINK nvme_dp 00:02:44.766 LINK err_injection 00:02:44.766 LINK overhead 00:02:44.766 CC test/nvme/startup/startup.o 00:02:44.766 CC examples/accel/perf/accel_perf.o 00:02:44.766 CC test/event/reactor_perf/reactor_perf.o 00:02:44.766 CC test/app/stub/stub.o 00:02:44.766 CXX test/cpp_headers/fuse_dispatcher.o 00:02:44.766 CXX test/cpp_headers/gpt_spec.o 00:02:44.766 CC test/accel/dif/dif.o 00:02:45.024 CC test/blobfs/mkfs/mkfs.o 00:02:45.024 LINK startup 00:02:45.024 LINK reactor_perf 00:02:45.024 CC test/event/app_repeat/app_repeat.o 00:02:45.024 CC test/lvol/esnap/esnap.o 00:02:45.024 CXX test/cpp_headers/hexlify.o 00:02:45.024 LINK stub 00:02:45.024 CC test/event/scheduler/scheduler.o 00:02:45.024 LINK mkfs 00:02:45.024 CC test/nvme/reserve/reserve.o 00:02:45.024 LINK app_repeat 00:02:45.024 CXX test/cpp_headers/histogram_data.o 00:02:45.281 CC examples/blob/hello_world/hello_blob.o 00:02:45.281 CC examples/blob/cli/blobcli.o 00:02:45.281 LINK accel_perf 00:02:45.281 CXX test/cpp_headers/idxd.o 00:02:45.281 LINK reserve 00:02:45.281 CXX test/cpp_headers/idxd_spec.o 00:02:45.281 LINK scheduler 00:02:45.281 CC examples/nvme/hello_world/hello_world.o 00:02:45.538 CXX test/cpp_headers/init.o 00:02:45.538 LINK hello_blob 00:02:45.538 LINK dif 00:02:45.538 CC examples/nvme/reconnect/reconnect.o 00:02:45.538 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:45.538 CC test/nvme/simple_copy/simple_copy.o 00:02:45.538 CC test/nvme/connect_stress/connect_stress.o 00:02:45.538 LINK hello_world 00:02:45.538 CXX test/cpp_headers/ioat.o 00:02:45.538 CXX test/cpp_headers/ioat_spec.o 00:02:45.804 LINK simple_copy 00:02:45.804 CXX test/cpp_headers/iscsi_spec.o 00:02:45.804 CC examples/nvme/arbitration/arbitration.o 00:02:45.804 LINK connect_stress 00:02:45.804 LINK blobcli 00:02:45.804 LINK reconnect 00:02:45.804 CXX test/cpp_headers/json.o 00:02:45.804 CC 
test/bdev/bdevio/bdevio.o 00:02:45.804 CC examples/bdev/hello_world/hello_bdev.o 00:02:45.804 CC examples/nvme/hotplug/hotplug.o 00:02:45.804 CXX test/cpp_headers/jsonrpc.o 00:02:45.804 CC test/nvme/boot_partition/boot_partition.o 00:02:46.076 LINK arbitration 00:02:46.076 CXX test/cpp_headers/keyring.o 00:02:46.076 LINK nvme_manage 00:02:46.076 CC test/nvme/compliance/nvme_compliance.o 00:02:46.076 LINK boot_partition 00:02:46.076 CXX test/cpp_headers/keyring_module.o 00:02:46.076 LINK hotplug 00:02:46.076 CXX test/cpp_headers/likely.o 00:02:46.076 CC test/nvme/fused_ordering/fused_ordering.o 00:02:46.076 LINK hello_bdev 00:02:46.076 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:46.076 CXX test/cpp_headers/log.o 00:02:46.076 CXX test/cpp_headers/lvol.o 00:02:46.336 CXX test/cpp_headers/md5.o 00:02:46.336 LINK bdevio 00:02:46.336 CXX test/cpp_headers/memory.o 00:02:46.336 LINK fused_ordering 00:02:46.336 CXX test/cpp_headers/mmio.o 00:02:46.336 LINK nvme_compliance 00:02:46.336 LINK cmb_copy 00:02:46.336 CC examples/bdev/bdevperf/bdevperf.o 00:02:46.336 CXX test/cpp_headers/nbd.o 00:02:46.336 CXX test/cpp_headers/net.o 00:02:46.336 CXX test/cpp_headers/notify.o 00:02:46.336 CXX test/cpp_headers/nvme.o 00:02:46.336 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:46.336 CC test/nvme/fdp/fdp.o 00:02:46.598 CC test/nvme/cuse/cuse.o 00:02:46.598 CC examples/nvme/abort/abort.o 00:02:46.598 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:46.598 CXX test/cpp_headers/nvme_intel.o 00:02:46.598 CXX test/cpp_headers/nvme_ocssd.o 00:02:46.598 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:46.598 LINK pmr_persistence 00:02:46.598 LINK doorbell_aers 00:02:46.598 CXX test/cpp_headers/nvme_spec.o 00:02:46.857 CXX test/cpp_headers/nvme_zns.o 00:02:46.857 CXX test/cpp_headers/nvmf_cmd.o 00:02:46.857 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:46.857 LINK fdp 00:02:46.857 CXX test/cpp_headers/nvmf.o 00:02:46.857 LINK abort 00:02:46.857 CXX test/cpp_headers/nvmf_spec.o 00:02:46.857 CXX test/cpp_headers/nvmf_transport.o 00:02:46.857 CXX test/cpp_headers/opal.o 00:02:46.857 CXX test/cpp_headers/opal_spec.o 00:02:46.857 CXX test/cpp_headers/pci_ids.o 00:02:46.857 CXX test/cpp_headers/pipe.o 00:02:46.857 CXX test/cpp_headers/queue.o 00:02:47.115 CXX test/cpp_headers/reduce.o 00:02:47.115 CXX test/cpp_headers/rpc.o 00:02:47.115 CXX test/cpp_headers/scheduler.o 00:02:47.115 CXX test/cpp_headers/scsi.o 00:02:47.115 CXX test/cpp_headers/scsi_spec.o 00:02:47.115 CXX test/cpp_headers/sock.o 00:02:47.115 CXX test/cpp_headers/stdinc.o 00:02:47.115 CXX test/cpp_headers/string.o 00:02:47.115 CXX test/cpp_headers/thread.o 00:02:47.115 LINK bdevperf 00:02:47.115 CXX test/cpp_headers/trace.o 00:02:47.115 CXX test/cpp_headers/trace_parser.o 00:02:47.115 CXX test/cpp_headers/tree.o 00:02:47.115 CXX test/cpp_headers/ublk.o 00:02:47.115 CXX test/cpp_headers/util.o 00:02:47.373 CXX test/cpp_headers/uuid.o 00:02:47.373 CXX test/cpp_headers/version.o 00:02:47.373 CXX test/cpp_headers/vfio_user_pci.o 00:02:47.373 CXX test/cpp_headers/vfio_user_spec.o 00:02:47.373 CXX test/cpp_headers/vhost.o 00:02:47.373 CXX test/cpp_headers/vmd.o 00:02:47.373 CXX test/cpp_headers/xor.o 00:02:47.373 CXX test/cpp_headers/zipf.o 00:02:47.373 CC examples/nvmf/nvmf/nvmf.o 00:02:47.631 LINK nvmf 00:02:47.631 LINK cuse 00:02:50.160 LINK esnap 00:02:50.160 00:02:50.160 real 1m9.157s 00:02:50.160 user 6m6.587s 00:02:50.160 sys 1m3.719s 00:02:50.160 ************************************ 00:02:50.160 END TEST make 00:02:50.160 
************************************ 00:02:50.160 01:22:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:50.160 01:22:58 make -- common/autotest_common.sh@10 -- $ set +x 00:02:50.160 01:22:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:50.160 01:22:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:50.160 01:22:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:50.160 01:22:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.160 01:22:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:50.160 01:22:58 -- pm/common@44 -- $ pid=5067 00:02:50.160 01:22:58 -- pm/common@50 -- $ kill -TERM 5067 00:02:50.160 01:22:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.160 01:22:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:50.160 01:22:58 -- pm/common@44 -- $ pid=5069 00:02:50.160 01:22:58 -- pm/common@50 -- $ kill -TERM 5069 00:02:50.160 01:22:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:50.160 01:22:58 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:50.419 01:22:58 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:50.419 01:22:58 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:50.419 01:22:58 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:50.419 01:22:58 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:50.419 01:22:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:50.419 01:22:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:50.419 01:22:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:50.419 01:22:58 -- scripts/common.sh@336 -- # IFS=.-: 00:02:50.419 01:22:58 -- scripts/common.sh@336 -- # read -ra ver1 00:02:50.419 01:22:58 -- scripts/common.sh@337 -- # IFS=.-: 00:02:50.419 01:22:58 -- scripts/common.sh@337 -- # read -ra ver2 00:02:50.419 01:22:58 -- scripts/common.sh@338 -- # local 'op=<' 00:02:50.419 01:22:58 -- scripts/common.sh@340 -- # ver1_l=2 00:02:50.419 01:22:58 -- scripts/common.sh@341 -- # ver2_l=1 00:02:50.419 01:22:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:50.419 01:22:58 -- scripts/common.sh@344 -- # case "$op" in 00:02:50.419 01:22:58 -- scripts/common.sh@345 -- # : 1 00:02:50.419 01:22:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:50.419 01:22:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:50.419 01:22:58 -- scripts/common.sh@365 -- # decimal 1 00:02:50.419 01:22:58 -- scripts/common.sh@353 -- # local d=1 00:02:50.419 01:22:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:50.419 01:22:58 -- scripts/common.sh@355 -- # echo 1 00:02:50.419 01:22:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:50.419 01:22:58 -- scripts/common.sh@366 -- # decimal 2 00:02:50.419 01:22:58 -- scripts/common.sh@353 -- # local d=2 00:02:50.419 01:22:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:50.419 01:22:58 -- scripts/common.sh@355 -- # echo 2 00:02:50.419 01:22:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:50.419 01:22:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:50.419 01:22:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:50.420 01:22:58 -- scripts/common.sh@368 -- # return 0 00:02:50.420 01:22:58 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:50.420 01:22:58 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:50.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.420 --rc genhtml_branch_coverage=1 00:02:50.420 --rc genhtml_function_coverage=1 00:02:50.420 --rc genhtml_legend=1 00:02:50.420 --rc geninfo_all_blocks=1 00:02:50.420 --rc geninfo_unexecuted_blocks=1 00:02:50.420 00:02:50.420 ' 00:02:50.420 01:22:58 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:50.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.420 --rc genhtml_branch_coverage=1 00:02:50.420 --rc genhtml_function_coverage=1 00:02:50.420 --rc genhtml_legend=1 00:02:50.420 --rc geninfo_all_blocks=1 00:02:50.420 --rc geninfo_unexecuted_blocks=1 00:02:50.420 00:02:50.420 ' 00:02:50.420 01:22:58 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:50.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.420 --rc genhtml_branch_coverage=1 00:02:50.420 --rc genhtml_function_coverage=1 00:02:50.420 --rc genhtml_legend=1 00:02:50.420 --rc geninfo_all_blocks=1 00:02:50.420 --rc geninfo_unexecuted_blocks=1 00:02:50.420 00:02:50.420 ' 00:02:50.420 01:22:58 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:50.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.420 --rc genhtml_branch_coverage=1 00:02:50.420 --rc genhtml_function_coverage=1 00:02:50.420 --rc genhtml_legend=1 00:02:50.420 --rc geninfo_all_blocks=1 00:02:50.420 --rc geninfo_unexecuted_blocks=1 00:02:50.420 00:02:50.420 ' 00:02:50.420 01:22:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:50.420 01:22:58 -- nvmf/common.sh@7 -- # uname -s 00:02:50.420 01:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:50.420 01:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:50.420 01:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:50.420 01:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:50.420 01:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:50.420 01:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:50.420 01:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:50.420 01:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:50.420 01:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:50.420 01:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:50.420 01:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6bcd302-db51-499b-b3da-54e4b86a5713 00:02:50.420 
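Just above, nvmf/common.sh mints a fresh host NQN with nvme gen-hostnqn; the assignment that follows immediately below derives NVME_HOSTID by stripping that NQN down to its UUID suffix. A minimal sketch of the pair (variable names mirror the script's, lowercased):

    hostnqn=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    hostid=${hostnqn##*:}         # drop everything through the last ':', keeping the UUID
    echo "NVME_HOSTNQN=$hostnqn"
    echo "NVME_HOSTID=$hostid"

Both values are packed into the NVME_HOST array a few lines down and ride along on every nvme connect the tests issue, so the initiator identity stays stable for the whole run.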
01:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=b6bcd302-db51-499b-b3da-54e4b86a5713 00:02:50.420 01:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:50.420 01:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:50.420 01:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:50.420 01:22:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:50.420 01:22:58 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:50.420 01:22:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:50.420 01:22:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:50.420 01:22:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.420 01:22:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.420 01:22:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.420 01:22:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.420 01:22:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.420 01:22:58 -- paths/export.sh@5 -- # export PATH 00:02:50.420 01:22:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.420 01:22:58 -- nvmf/common.sh@51 -- # : 0 00:02:50.420 01:22:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:50.420 01:22:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:50.420 01:22:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:50.420 01:22:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:50.420 01:22:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:50.420 01:22:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:50.420 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:50.420 01:22:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:50.420 01:22:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:50.420 01:22:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:50.420 01:22:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:50.420 01:22:58 -- spdk/autotest.sh@32 -- # uname -s 00:02:50.420 01:22:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:50.420 01:22:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:50.420 01:22:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:50.420 01:22:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:50.420 01:22:58 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:50.420 01:22:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:50.420 01:22:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:50.420 01:22:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:50.420 01:22:58 -- spdk/autotest.sh@48 -- # udevadm_pid=54272 00:02:50.420 01:22:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:50.420 01:22:58 -- pm/common@17 -- # local monitor 00:02:50.420 01:22:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.420 01:22:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:50.420 01:22:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.420 01:22:58 -- pm/common@25 -- # sleep 1 00:02:50.420 01:22:58 -- pm/common@21 -- # date +%s 00:02:50.420 01:22:58 -- pm/common@21 -- # date +%s 00:02:50.420 01:22:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731806578 00:02:50.420 01:22:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731806578 00:02:50.420 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731806578_collect-vmstat.pm.log 00:02:50.420 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731806578_collect-cpu-load.pm.log 00:02:51.353 01:22:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:51.353 01:22:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:51.353 01:22:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:51.353 01:22:59 -- common/autotest_common.sh@10 -- # set +x 00:02:51.353 01:22:59 -- spdk/autotest.sh@59 -- # create_test_list 00:02:51.353 01:22:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:51.353 01:22:59 -- common/autotest_common.sh@10 -- # set +x 00:02:51.612 01:22:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:51.612 01:22:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:51.612 01:22:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:51.612 01:22:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:51.612 01:22:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:51.612 01:22:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:51.612 01:22:59 -- common/autotest_common.sh@1457 -- # uname 00:02:51.612 01:22:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:51.612 01:22:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:51.612 01:22:59 -- common/autotest_common.sh@1477 -- # uname 00:02:51.612 01:22:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:51.612 01:22:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:51.612 01:22:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:51.612 lcov: LCOV version 1.15 00:02:51.612 01:22:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:06.496 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:06.497 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:21.393 01:23:29 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:21.393 01:23:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:21.393 01:23:29 -- common/autotest_common.sh@10 -- # set +x 00:03:21.393 01:23:29 -- spdk/autotest.sh@78 -- # rm -f 00:03:21.393 01:23:29 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:21.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:21.653 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:21.653 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:21.915 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:21.915 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:21.915 01:23:30 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:21.915 01:23:30 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:21.915 01:23:30 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:21.915 01:23:30 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:21.915 01:23:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:21.915 01:23:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:21.915 01:23:30 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:21.915 01:23:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:21.915 01:23:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:03:21.915 01:23:30 -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:03:21.915 01:23:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:21.915 01:23:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:21.915 01:23:30 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:21.915 01:23:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:21.915 01:23:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:03:21.915 01:23:30 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:21.915 01:23:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:21.915 01:23:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:03:21.915 01:23:30 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:03:21.915 01:23:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:21.915 
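The get_zoned_devs walk above visits each /sys/block/nvme* entry and treats a namespace as zoned only when its queue/zoned attribute reads something other than "none" (ZNS-capable kernels report "host-aware" or "host-managed" there). On this VM every probe lands in [[ none != none ]], so nothing is flagged. Condensed to its core, the scan looks roughly like this; the associative-array name is illustrative:

    declare -A zoned_devs=()

    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        # "none" means a conventional namespace; anything else is zoned.
        if [[ $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1
        fi
    done

    echo "zoned namespaces: ${!zoned_devs[*]}"

The [[ -z '' ]] checks at autotest.sh@99 further down are this table being consulted per device and coming back empty, which is what lets the wipe loop touch every namespace.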
01:23:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:21.915 01:23:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n2 00:03:21.915 01:23:30 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:03:21.915 01:23:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:21.915 01:23:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n3 00:03:21.915 01:23:30 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:03:21.915 01:23:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:03:21.915 01:23:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:21.915 01:23:30 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:21.915 01:23:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:21.915 01:23:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:21.915 01:23:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:21.915 01:23:30 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:21.915 01:23:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:21.915 No valid GPT data, bailing 00:03:21.915 01:23:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:21.915 01:23:30 -- scripts/common.sh@394 -- # pt= 00:03:21.915 01:23:30 -- scripts/common.sh@395 -- # return 1 00:03:21.915 01:23:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:21.915 1+0 records in 00:03:21.915 1+0 records out 00:03:21.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298382 s, 35.1 MB/s 00:03:21.915 01:23:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:21.915 01:23:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:21.915 01:23:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:21.915 01:23:30 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:21.915 01:23:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:21.915 No valid GPT data, bailing 00:03:21.915 01:23:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:21.915 01:23:30 -- scripts/common.sh@394 -- # pt= 00:03:21.915 01:23:30 -- scripts/common.sh@395 -- # return 1 00:03:21.915 01:23:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:21.915 1+0 records in 00:03:21.915 1+0 records out 00:03:21.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00576009 s, 182 MB/s 00:03:22.177 01:23:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:22.177 01:23:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:22.177 01:23:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:22.177 01:23:30 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:22.177 01:23:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:22.177 No valid GPT data, bailing 00:03:22.177 01:23:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:22.177 01:23:30 -- scripts/common.sh@394 -- # pt= 00:03:22.177 01:23:30 -- scripts/common.sh@395 -- # return 1 00:03:22.177 01:23:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:22.177 1+0 
records in 00:03:22.177 1+0 records out 00:03:22.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446618 s, 235 MB/s 00:03:22.177 01:23:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:22.177 01:23:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:22.177 01:23:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:22.177 01:23:30 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:22.177 01:23:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:22.177 No valid GPT data, bailing 00:03:22.177 01:23:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:22.177 01:23:30 -- scripts/common.sh@394 -- # pt= 00:03:22.177 01:23:30 -- scripts/common.sh@395 -- # return 1 00:03:22.177 01:23:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:22.177 1+0 records in 00:03:22.177 1+0 records out 00:03:22.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473187 s, 222 MB/s 00:03:22.177 01:23:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:22.177 01:23:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:22.177 01:23:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:03:22.177 01:23:30 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:03:22.177 01:23:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:03:22.177 No valid GPT data, bailing 00:03:22.177 01:23:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:03:22.177 01:23:30 -- scripts/common.sh@394 -- # pt= 00:03:22.177 01:23:30 -- scripts/common.sh@395 -- # return 1 00:03:22.177 01:23:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:03:22.177 1+0 records in 00:03:22.177 1+0 records out 00:03:22.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495738 s, 212 MB/s 00:03:22.177 01:23:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:22.177 01:23:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:22.177 01:23:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:03:22.177 01:23:30 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:03:22.177 01:23:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:03:22.439 No valid GPT data, bailing 00:03:22.439 01:23:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:03:22.439 01:23:30 -- scripts/common.sh@394 -- # pt= 00:03:22.439 01:23:30 -- scripts/common.sh@395 -- # return 1 00:03:22.439 01:23:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:03:22.439 1+0 records in 00:03:22.439 1+0 records out 00:03:22.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00623106 s, 168 MB/s 00:03:22.439 01:23:30 -- spdk/autotest.sh@105 -- # sync 00:03:22.439 01:23:30 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:22.439 01:23:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:22.439 01:23:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:24.358 01:23:32 -- spdk/autotest.sh@111 -- # uname -s 00:03:24.358 01:23:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:24.358 01:23:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:24.358 01:23:32 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:24.619 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:24.881 
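Each block_in_use/dd pair above repeats one pattern per namespace: ask spdk-gpt.py whether the disk already carries a GPT, double-check with blkid -s PTTYPE, and only when both probes come back empty ("No valid GPT data, bailing") scrub the first mebibyte so stale metadata cannot confuse later tests. The shape of that loop, reduced to a sketch rather than the verbatim autotest code:

    shopt -s extglob                              # needed for the !(*p*) pattern

    for dev in /dev/nvme*n!(*p*); do              # whole namespaces, not partitions
        scripts/spdk-gpt.py "$dev" && continue    # a valid GPT means the disk is in use
        pt=$(blkid -s PTTYPE -o value "$dev")
        [[ -n $pt ]] && continue                  # some other partition table is present
        dd if=/dev/zero of="$dev" bs=1M count=1   # wipe the label area
    done

The spread in the dd throughput figures (35.1 MB/s for the first wipe, roughly 200 MB/s afterwards) is most likely first-write allocation warm-up in the emulated backing store rather than a real device difference.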
Hugepages 00:03:24.881 node hugesize free / total 00:03:24.881 node0 1048576kB 0 / 0 00:03:24.881 node0 2048kB 0 / 0 00:03:24.881 00:03:24.881 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.143 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:25.143 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:25.143 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:25.143 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:03:25.405 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:25.405 01:23:33 -- spdk/autotest.sh@117 -- # uname -s 00:03:25.405 01:23:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:25.405 01:23:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:25.405 01:23:33 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:25.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:26.240 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:26.240 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:26.240 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:26.549 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:26.549 01:23:34 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:27.512 01:23:35 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:27.512 01:23:35 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:27.512 01:23:35 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:27.512 01:23:35 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:27.512 01:23:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:27.512 01:23:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:27.512 01:23:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:27.512 01:23:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:27.512 01:23:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:27.512 01:23:35 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:27.512 01:23:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:27.512 01:23:35 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:27.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:28.035 Waiting for block devices as requested 00:03:28.035 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:28.035 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:28.296 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:28.296 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:33.594 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:33.594 01:23:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:33.594 01:23:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:33.594 01:23:41 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:33.594 01:23:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:33.594 01:23:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:33.594 01:23:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:33.594 01:23:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:33.594 01:23:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:33.594 01:23:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1543 -- # continue 00:03:33.594 01:23:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:33.594 01:23:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:33.594 01:23:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:33.594 01:23:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:33.594 01:23:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:33.594 01:23:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:33.594 01:23:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:33.594 01:23:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:33.594 01:23:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1543 -- # continue 00:03:33.594 01:23:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:33.594 01:23:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:33.594 01:23:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:33.594 01:23:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:33.594 01:23:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1543 -- # continue 00:03:33.594 01:23:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:33.594 01:23:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:33.594 01:23:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:33.594 01:23:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:33.594 01:23:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:03:33.594 01:23:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:03:33.594 01:23:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:33.594 01:23:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:33.594 01:23:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:33.594 01:23:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:33.594 01:23:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
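The stanza that repeats above for 0000:00:10.0, 11.0 and 12.0 (and completes for 13.0 just below) does two things per controller: resolve the PCI address to its character device via the /sys/class/nvme symlinks, then pull the OACS word out of nvme id-ctrl and test bit 3 (mask 0x8, namespace management) before checking unallocated capacity. A compact sketch of both steps; get_ctrlr_from_bdf is an illustrative name and nvme-cli is assumed to be installed:

    get_ctrlr_from_bdf() {
        local bdf=$1 link
        # Each /sys/class/nvme/nvmeN resolves to .../pci0000:00/<bdf>/nvme/nvmeN
        for link in /sys/class/nvme/nvme*; do
            if [[ $(readlink -f "$link") == *"/$bdf/nvme/"* ]]; then
                echo "/dev/${link##*/}"
                return 0
            fi
        done
        return 1
    }

    ctrlr=$(get_ctrlr_from_bdf 0000:00:10.0) || exit 1
    oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/^oacs/ {print $2}')
    if (( oacs & 0x8 )); then        # 0x12a & 0x8 != 0: ns-manage is supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | awk -F: '/^unvmcap/ {print $2}')
        (( unvmcap == 0 )) && echo "$ctrlr: all capacity allocated, nothing to revert"
    fi

The sysfs indirection is also why the numbering looks shuffled in the log: the controller at 0000:00:10.0 resolves to /dev/nvme1 here, not /dev/nvme0.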
00:03:33.594 01:23:41 -- common/autotest_common.sh@1543 -- # continue 00:03:33.594 01:23:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:33.594 01:23:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:33.594 01:23:41 -- common/autotest_common.sh@10 -- # set +x 00:03:33.594 01:23:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:33.594 01:23:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:33.594 01:23:41 -- common/autotest_common.sh@10 -- # set +x 00:03:33.594 01:23:41 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:34.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.739 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.739 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.739 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.739 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.739 01:23:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:34.739 01:23:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:34.739 01:23:43 -- common/autotest_common.sh@10 -- # set +x 00:03:34.739 01:23:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:34.739 01:23:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:34.739 01:23:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:34.739 01:23:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:34.739 01:23:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:34.739 01:23:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:34.739 01:23:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:34.739 01:23:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:34.739 01:23:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:34.739 01:23:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:34.739 01:23:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.739 01:23:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:34.739 01:23:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:34.739 01:23:43 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:34.739 01:23:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:34.740 01:23:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:34.740 01:23:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:34.740 01:23:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:34.740 01:23:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:34.740 01:23:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:34.740 01:23:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:35.002 01:23:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:35.002 01:23:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:35.002 01:23:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:35.002 01:23:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:35.002 01:23:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:35.002 01:23:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
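opal_revert_cleanup above narrows the bdf list to controllers whose PCI device ID reads 0x0a54, the data-center NVMe part the Opal revert logic targets; every emulated controller here reports 0x0010, so the list stays empty and the (( 0 > 0 )) just below falls through to return 0. The filter reduces to roughly the following, with the hard-coded bdf list standing in for the gen_nvme.sh | jq enumeration shown earlier in the log:

    target=0x0a54
    opal_bdfs=()

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(<"/sys/bus/pci/devices/$bdf/device")   # reads "0x0010" on these QEMU controllers
        [[ $device == "$target" ]] && opal_bdfs+=("$bdf")
    done

    (( ${#opal_bdfs[@]} > 0 )) || echo "no 0x0a54 controllers; skipping Opal revert"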
00:03:35.002 01:23:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:35.002 01:23:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:35.002 01:23:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:35.002 01:23:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:35.002 01:23:43 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:35.002 01:23:43 -- common/autotest_common.sh@1572 -- # return 0 00:03:35.002 01:23:43 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:35.002 01:23:43 -- common/autotest_common.sh@1580 -- # return 0 00:03:35.002 01:23:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:35.002 01:23:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:35.002 01:23:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:35.002 01:23:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:35.002 01:23:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:35.002 01:23:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.002 01:23:43 -- common/autotest_common.sh@10 -- # set +x 00:03:35.002 01:23:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:35.002 01:23:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:35.002 01:23:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.002 01:23:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.002 01:23:43 -- common/autotest_common.sh@10 -- # set +x 00:03:35.002 ************************************ 00:03:35.002 START TEST env 00:03:35.002 ************************************ 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:35.002 * Looking for test storage... 00:03:35.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:35.002 01:23:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.002 01:23:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.002 01:23:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.002 01:23:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.002 01:23:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.002 01:23:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.002 01:23:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.002 01:23:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.002 01:23:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.002 01:23:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.002 01:23:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.002 01:23:43 env -- scripts/common.sh@344 -- # case "$op" in 00:03:35.002 01:23:43 env -- scripts/common.sh@345 -- # : 1 00:03:35.002 01:23:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.002 01:23:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:35.002 01:23:43 env -- scripts/common.sh@365 -- # decimal 1 00:03:35.002 01:23:43 env -- scripts/common.sh@353 -- # local d=1 00:03:35.002 01:23:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.002 01:23:43 env -- scripts/common.sh@355 -- # echo 1 00:03:35.002 01:23:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.002 01:23:43 env -- scripts/common.sh@366 -- # decimal 2 00:03:35.002 01:23:43 env -- scripts/common.sh@353 -- # local d=2 00:03:35.002 01:23:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.002 01:23:43 env -- scripts/common.sh@355 -- # echo 2 00:03:35.002 01:23:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.002 01:23:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.002 01:23:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.002 01:23:43 env -- scripts/common.sh@368 -- # return 0 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:35.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.002 --rc genhtml_branch_coverage=1 00:03:35.002 --rc genhtml_function_coverage=1 00:03:35.002 --rc genhtml_legend=1 00:03:35.002 --rc geninfo_all_blocks=1 00:03:35.002 --rc geninfo_unexecuted_blocks=1 00:03:35.002 00:03:35.002 ' 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:35.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.002 --rc genhtml_branch_coverage=1 00:03:35.002 --rc genhtml_function_coverage=1 00:03:35.002 --rc genhtml_legend=1 00:03:35.002 --rc geninfo_all_blocks=1 00:03:35.002 --rc geninfo_unexecuted_blocks=1 00:03:35.002 00:03:35.002 ' 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:35.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.002 --rc genhtml_branch_coverage=1 00:03:35.002 --rc genhtml_function_coverage=1 00:03:35.002 --rc genhtml_legend=1 00:03:35.002 --rc geninfo_all_blocks=1 00:03:35.002 --rc geninfo_unexecuted_blocks=1 00:03:35.002 00:03:35.002 ' 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:35.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.002 --rc genhtml_branch_coverage=1 00:03:35.002 --rc genhtml_function_coverage=1 00:03:35.002 --rc genhtml_legend=1 00:03:35.002 --rc geninfo_all_blocks=1 00:03:35.002 --rc geninfo_unexecuted_blocks=1 00:03:35.002 00:03:35.002 ' 00:03:35.002 01:23:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.002 01:23:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.002 01:23:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.002 ************************************ 00:03:35.002 START TEST env_memory 00:03:35.002 ************************************ 00:03:35.002 01:23:43 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:35.002 00:03:35.002 00:03:35.002 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.002 http://cunit.sourceforge.net/ 00:03:35.002 00:03:35.002 00:03:35.002 Suite: memory 00:03:35.264 Test: alloc and free memory map ...[2024-11-17 01:23:43.465660] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:35.264 passed 00:03:35.264 Test: mem map translation ...[2024-11-17 01:23:43.505492] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:35.264 [2024-11-17 01:23:43.505564] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:35.264 [2024-11-17 01:23:43.505627] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:35.264 [2024-11-17 01:23:43.505642] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:35.265 passed 00:03:35.265 Test: mem map registration ...[2024-11-17 01:23:43.574405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:35.265 [2024-11-17 01:23:43.574457] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:35.265 passed 00:03:35.265 Test: mem map adjacent registrations ...passed 00:03:35.265 00:03:35.265 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.265 suites 1 1 n/a 0 0 00:03:35.265 tests 4 4 4 0 0 00:03:35.265 asserts 152 152 152 0 n/a 00:03:35.265 00:03:35.265 Elapsed time = 0.235 seconds 00:03:35.265 00:03:35.265 real 0m0.270s 00:03:35.265 user 0m0.241s 00:03:35.265 sys 0m0.021s 00:03:35.265 01:23:43 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.265 ************************************ 00:03:35.265 END TEST env_memory 00:03:35.265 ************************************ 00:03:35.265 01:23:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:35.527 01:23:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:35.527 01:23:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.527 01:23:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.527 01:23:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.527 ************************************ 00:03:35.527 START TEST env_vtophys 00:03:35.527 ************************************ 00:03:35.527 01:23:43 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:35.527 EAL: lib.eal log level changed from notice to debug 00:03:35.527 EAL: Detected lcore 0 as core 0 on socket 0 00:03:35.527 EAL: Detected lcore 1 as core 0 on socket 0 00:03:35.527 EAL: Detected lcore 2 as core 0 on socket 0 00:03:35.527 EAL: Detected lcore 3 as core 0 on socket 0 00:03:35.527 EAL: Detected lcore 4 as core 0 on socket 0 00:03:35.527 EAL: Detected lcore 5 as core 0 on socket 0 00:03:35.527 EAL: Detected lcore 6 as core 0 on socket 0 00:03:35.527 EAL: Detected lcore 7 as core 0 on socket 0 00:03:35.527 EAL: Detected lcore 8 as core 0 on socket 0 00:03:35.527 EAL: Detected lcore 9 as core 0 on socket 0 00:03:35.527 EAL: Maximum logical cores by configuration: 128 00:03:35.527 EAL: Detected CPU lcores: 10 00:03:35.527 EAL: Detected NUMA nodes: 1 00:03:35.527 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:35.527 EAL: Detected shared linkage of DPDK 00:03:35.527 EAL: No 
shared files mode enabled, IPC will be disabled 00:03:35.527 EAL: Selected IOVA mode 'PA' 00:03:35.527 EAL: Probing VFIO support... 00:03:35.527 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:35.527 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:35.527 EAL: Ask a virtual area of 0x2e000 bytes 00:03:35.527 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:35.527 EAL: Setting up physically contiguous memory... 00:03:35.527 EAL: Setting maximum number of open files to 524288 00:03:35.527 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:35.527 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:35.527 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.527 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:35.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.527 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.527 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:35.527 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:35.527 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.527 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:35.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.527 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.527 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:35.527 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:35.527 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.527 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:35.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.527 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.527 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:35.527 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:35.527 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.527 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:35.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.527 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.527 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:35.527 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:35.527 EAL: Hugepages will be freed exactly as allocated. 00:03:35.527 EAL: No shared files mode enabled, IPC is disabled 00:03:35.527 EAL: No shared files mode enabled, IPC is disabled 00:03:35.527 EAL: TSC frequency is ~2600000 KHz 00:03:35.527 EAL: Main lcore 0 is ready (tid=7f96f8340a40;cpuset=[0]) 00:03:35.527 EAL: Trying to obtain current memory policy. 00:03:35.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.527 EAL: Restoring previous memory policy: 0 00:03:35.527 EAL: request: mp_malloc_sync 00:03:35.527 EAL: No shared files mode enabled, IPC is disabled 00:03:35.527 EAL: Heap on socket 0 was expanded by 2MB 00:03:35.527 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:35.527 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:35.527 EAL: Mem event callback 'spdk:(nil)' registered 00:03:35.527 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:03:35.527 00:03:35.528 00:03:35.528 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.528 http://cunit.sourceforge.net/ 00:03:35.528 00:03:35.528 00:03:35.528 Suite: components_suite 00:03:36.101 Test: vtophys_malloc_test ...passed 00:03:36.101 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:36.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.101 EAL: Restoring previous memory policy: 4 00:03:36.101 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.101 EAL: request: mp_malloc_sync 00:03:36.101 EAL: No shared files mode enabled, IPC is disabled 00:03:36.101 EAL: Heap on socket 0 was expanded by 4MB 00:03:36.101 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.101 EAL: request: mp_malloc_sync 00:03:36.101 EAL: No shared files mode enabled, IPC is disabled 00:03:36.101 EAL: Heap on socket 0 was shrunk by 4MB 00:03:36.101 EAL: Trying to obtain current memory policy. 00:03:36.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.101 EAL: Restoring previous memory policy: 4 00:03:36.101 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.101 EAL: request: mp_malloc_sync 00:03:36.101 EAL: No shared files mode enabled, IPC is disabled 00:03:36.101 EAL: Heap on socket 0 was expanded by 6MB 00:03:36.101 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.101 EAL: request: mp_malloc_sync 00:03:36.101 EAL: No shared files mode enabled, IPC is disabled 00:03:36.101 EAL: Heap on socket 0 was shrunk by 6MB 00:03:36.101 EAL: Trying to obtain current memory policy. 00:03:36.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.101 EAL: Restoring previous memory policy: 4 00:03:36.101 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.101 EAL: request: mp_malloc_sync 00:03:36.101 EAL: No shared files mode enabled, IPC is disabled 00:03:36.101 EAL: Heap on socket 0 was expanded by 10MB 00:03:36.101 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.101 EAL: request: mp_malloc_sync 00:03:36.101 EAL: No shared files mode enabled, IPC is disabled 00:03:36.101 EAL: Heap on socket 0 was shrunk by 10MB 00:03:36.101 EAL: Trying to obtain current memory policy. 00:03:36.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.101 EAL: Restoring previous memory policy: 4 00:03:36.101 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.101 EAL: request: mp_malloc_sync 00:03:36.101 EAL: No shared files mode enabled, IPC is disabled 00:03:36.101 EAL: Heap on socket 0 was expanded by 18MB 00:03:36.102 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.102 EAL: request: mp_malloc_sync 00:03:36.102 EAL: No shared files mode enabled, IPC is disabled 00:03:36.102 EAL: Heap on socket 0 was shrunk by 18MB 00:03:36.102 EAL: Trying to obtain current memory policy. 00:03:36.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.102 EAL: Restoring previous memory policy: 4 00:03:36.102 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.102 EAL: request: mp_malloc_sync 00:03:36.102 EAL: No shared files mode enabled, IPC is disabled 00:03:36.102 EAL: Heap on socket 0 was expanded by 34MB 00:03:36.102 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.102 EAL: request: mp_malloc_sync 00:03:36.102 EAL: No shared files mode enabled, IPC is disabled 00:03:36.102 EAL: Heap on socket 0 was shrunk by 34MB 00:03:36.102 EAL: Trying to obtain current memory policy. 
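The alternating "Heap on socket 0 was expanded/shrunk by N MB" records above come from DPDK growing and releasing its malloc heap on demand; each change is delivered to the memory-event callback SPDK registered earlier (logged as "Mem event callback 'spdk:(nil)' registered", i.e. name:(argument)). A minimal sketch of such a callback, assuming only the public rte_memory.h API and an illustrative registration name:

#include <rte_memory.h>
#include <stdio.h>

/* Invoked by DPDK each time hugepage-backed memory is allocated or freed;
 * SPDK hooks a callback like this to keep its translation maps in sync. */
static void
mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
{
	(void)arg;
	printf("mem event: %s addr=%p len=%zu\n",
	       event == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
}

/* DPDK logs registration as "<name>:(<arg>)" -- hence 'spdk:(nil)' above. */
static int
register_mem_event_cb(void)
{
	return rte_mem_event_callback_register("example", mem_event_cb, NULL);
}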
00:03:36.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.102 EAL: Restoring previous memory policy: 4 00:03:36.102 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.102 EAL: request: mp_malloc_sync 00:03:36.102 EAL: No shared files mode enabled, IPC is disabled 00:03:36.102 EAL: Heap on socket 0 was expanded by 66MB 00:03:36.364 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.364 EAL: request: mp_malloc_sync 00:03:36.364 EAL: No shared files mode enabled, IPC is disabled 00:03:36.364 EAL: Heap on socket 0 was shrunk by 66MB 00:03:36.364 EAL: Trying to obtain current memory policy. 00:03:36.364 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.364 EAL: Restoring previous memory policy: 4 00:03:36.364 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.364 EAL: request: mp_malloc_sync 00:03:36.364 EAL: No shared files mode enabled, IPC is disabled 00:03:36.364 EAL: Heap on socket 0 was expanded by 130MB 00:03:36.626 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.626 EAL: request: mp_malloc_sync 00:03:36.626 EAL: No shared files mode enabled, IPC is disabled 00:03:36.626 EAL: Heap on socket 0 was shrunk by 130MB 00:03:36.626 EAL: Trying to obtain current memory policy. 00:03:36.626 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.626 EAL: Restoring previous memory policy: 4 00:03:36.626 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.626 EAL: request: mp_malloc_sync 00:03:36.626 EAL: No shared files mode enabled, IPC is disabled 00:03:36.626 EAL: Heap on socket 0 was expanded by 258MB 00:03:36.888 EAL: Calling mem event callback 'spdk:(nil)' 00:03:37.149 EAL: request: mp_malloc_sync 00:03:37.149 EAL: No shared files mode enabled, IPC is disabled 00:03:37.149 EAL: Heap on socket 0 was shrunk by 258MB 00:03:37.410 EAL: Trying to obtain current memory policy. 00:03:37.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:37.410 EAL: Restoring previous memory policy: 4 00:03:37.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:37.410 EAL: request: mp_malloc_sync 00:03:37.410 EAL: No shared files mode enabled, IPC is disabled 00:03:37.410 EAL: Heap on socket 0 was expanded by 514MB 00:03:37.984 EAL: Calling mem event callback 'spdk:(nil)' 00:03:37.984 EAL: request: mp_malloc_sync 00:03:37.984 EAL: No shared files mode enabled, IPC is disabled 00:03:37.984 EAL: Heap on socket 0 was shrunk by 514MB 00:03:38.556 EAL: Trying to obtain current memory policy. 
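Underneath these heap events, what vtophys_malloc_test ultimately checks is the virtual-to-physical translation itself. A hedged sketch of the lookup the test exercises, using the public spdk/env.h API (the buffer and length here are illustrative):

#include "spdk/env.h"

/* Translate a buffer's virtual address to a physical address.
 * spdk_vtophys() returns SPDK_VTOPHYS_ERROR when the buffer does not
 * lie inside memory that the env layer has registered. */
static uint64_t
buf_to_phys(void *buf)
{
	uint64_t len = 4096;	/* in/out: contiguous bytes mapped at *buf */

	return spdk_vtophys(buf, &len);
}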
00:03:38.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.556 EAL: Restoring previous memory policy: 4 00:03:38.556 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.556 EAL: request: mp_malloc_sync 00:03:38.556 EAL: No shared files mode enabled, IPC is disabled 00:03:38.556 EAL: Heap on socket 0 was expanded by 1026MB 00:03:39.498 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.498 EAL: request: mp_malloc_sync 00:03:39.498 EAL: No shared files mode enabled, IPC is disabled 00:03:39.498 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:40.440 passed 00:03:40.440 00:03:40.440 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.440 suites 1 1 n/a 0 0 00:03:40.440 tests 2 2 2 0 0 00:03:40.440 asserts 5810 5810 5810 0 n/a 00:03:40.440 00:03:40.440 Elapsed time = 4.698 seconds 00:03:40.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.440 EAL: request: mp_malloc_sync 00:03:40.440 EAL: No shared files mode enabled, IPC is disabled 00:03:40.440 EAL: Heap on socket 0 was shrunk by 2MB 00:03:40.440 EAL: No shared files mode enabled, IPC is disabled 00:03:40.440 EAL: No shared files mode enabled, IPC is disabled 00:03:40.440 EAL: No shared files mode enabled, IPC is disabled 00:03:40.440 00:03:40.440 real 0m4.972s 00:03:40.440 user 0m4.053s 00:03:40.440 sys 0m0.767s 00:03:40.440 01:23:48 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.440 ************************************ 00:03:40.440 END TEST env_vtophys 00:03:40.440 01:23:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:40.440 ************************************ 00:03:40.440 01:23:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:40.440 01:23:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.440 01:23:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.440 01:23:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.440 ************************************ 00:03:40.440 START TEST env_pci 00:03:40.440 ************************************ 00:03:40.440 01:23:48 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:40.440 00:03:40.440 00:03:40.440 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.440 http://cunit.sourceforge.net/ 00:03:40.440 00:03:40.440 00:03:40.440 Suite: pci 00:03:40.440 Test: pci_hook ...[2024-11-17 01:23:48.806234] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57056 has claimed it 00:03:40.440 passed 00:03:40.440 00:03:40.440 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.440 suites 1 1 n/a 0 0 00:03:40.440 tests 1 1 1 0 0 00:03:40.440 asserts 25 25 25 0 n/a 00:03:40.440 00:03:40.440 Elapsed time = 0.007 seconds 00:03:40.440 EAL: Cannot find device (10000:00:01.0) 00:03:40.440 EAL: Failed to attach device on primary process 00:03:40.440 00:03:40.440 real 0m0.055s 00:03:40.440 user 0m0.029s 00:03:40.440 sys 0m0.026s 00:03:40.440 01:23:48 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.440 01:23:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:40.440 ************************************ 00:03:40.440 END TEST env_pci 00:03:40.440 ************************************ 00:03:40.440 01:23:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:40.440 01:23:48 env -- env/env.sh@15 -- # uname 00:03:40.440 01:23:48 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:40.440 01:23:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:40.440 01:23:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:40.440 01:23:48 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:40.440 01:23:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.440 01:23:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.440 ************************************ 00:03:40.440 START TEST env_dpdk_post_init 00:03:40.440 ************************************ 00:03:40.440 01:23:48 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:40.701 EAL: Detected CPU lcores: 10 00:03:40.702 EAL: Detected NUMA nodes: 1 00:03:40.702 EAL: Detected shared linkage of DPDK 00:03:40.702 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:40.702 EAL: Selected IOVA mode 'PA' 00:03:40.702 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:40.702 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:40.702 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:40.702 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:03:40.702 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:03:40.702 Starting DPDK initialization... 00:03:40.702 Starting SPDK post initialization... 00:03:40.702 SPDK NVMe probe 00:03:40.702 Attaching to 0000:00:10.0 00:03:40.702 Attaching to 0000:00:11.0 00:03:40.702 Attaching to 0000:00:12.0 00:03:40.702 Attaching to 0000:00:13.0 00:03:40.702 Attached to 0000:00:10.0 00:03:40.702 Attached to 0000:00:11.0 00:03:40.702 Attached to 0000:00:13.0 00:03:40.702 Attached to 0000:00:12.0 00:03:40.702 Cleaning up... 
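The "Attaching to"/"Attached to" records above are env_dpdk_post_init walking the local PCIe bus and binding the spdk_nvme driver to each emulated controller. A sketch of the probe/attach flow it wraps, assuming the public spdk/nvme.h API:

#include "spdk/nvme.h"
#include <stdio.h>

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;	/* attach to every controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

static int
enumerate_local_nvme(void)
{
	/* A NULL transport ID means "enumerate the local PCIe bus". */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}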
00:03:40.702 00:03:40.702 real 0m0.233s 00:03:40.702 user 0m0.075s 00:03:40.702 sys 0m0.062s 00:03:40.702 01:23:49 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.702 01:23:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:40.702 ************************************ 00:03:40.702 END TEST env_dpdk_post_init 00:03:40.702 ************************************ 00:03:40.961 01:23:49 env -- env/env.sh@26 -- # uname 00:03:40.961 01:23:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:40.961 01:23:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:40.961 01:23:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.961 01:23:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.961 01:23:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.961 ************************************ 00:03:40.961 START TEST env_mem_callbacks 00:03:40.961 ************************************ 00:03:40.962 01:23:49 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:40.962 EAL: Detected CPU lcores: 10 00:03:40.962 EAL: Detected NUMA nodes: 1 00:03:40.962 EAL: Detected shared linkage of DPDK 00:03:40.962 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:40.962 EAL: Selected IOVA mode 'PA' 00:03:40.962 00:03:40.962 00:03:40.962 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.962 http://cunit.sourceforge.net/ 00:03:40.962 00:03:40.962 00:03:40.962 Suite: memory 00:03:40.962 Test: test ... 00:03:40.962 register 0x200000200000 2097152 00:03:40.962 malloc 3145728 00:03:40.962 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:40.962 register 0x200000400000 4194304 00:03:40.962 buf 0x2000004fffc0 len 3145728 PASSED 00:03:40.962 malloc 64 00:03:40.962 buf 0x2000004ffec0 len 64 PASSED 00:03:40.962 malloc 4194304 00:03:40.962 register 0x200000800000 6291456 00:03:40.962 buf 0x2000009fffc0 len 4194304 PASSED 00:03:40.962 free 0x2000004fffc0 3145728 00:03:40.962 free 0x2000004ffec0 64 00:03:40.962 unregister 0x200000400000 4194304 PASSED 00:03:40.962 free 0x2000009fffc0 4194304 00:03:40.962 unregister 0x200000800000 6291456 PASSED 00:03:40.962 malloc 8388608 00:03:40.962 register 0x200000400000 10485760 00:03:40.962 buf 0x2000005fffc0 len 8388608 PASSED 00:03:40.962 free 0x2000005fffc0 8388608 00:03:40.962 unregister 0x200000400000 10485760 PASSED 00:03:40.962 passed 00:03:40.962 00:03:40.962 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.962 suites 1 1 n/a 0 0 00:03:40.962 tests 1 1 1 0 0 00:03:40.962 asserts 15 15 15 0 n/a 00:03:40.962 00:03:40.962 Elapsed time = 0.051 seconds 00:03:41.222 00:03:41.222 real 0m0.226s 00:03:41.222 user 0m0.076s 00:03:41.222 sys 0m0.048s 00:03:41.222 01:23:49 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.222 ************************************ 00:03:41.222 END TEST env_mem_callbacks 00:03:41.222 ************************************ 00:03:41.222 01:23:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:41.222 00:03:41.222 real 0m6.234s 00:03:41.222 user 0m4.631s 00:03:41.222 sys 0m1.151s 00:03:41.222 01:23:49 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.222 ************************************ 00:03:41.222 END TEST env 00:03:41.222 ************************************ 00:03:41.222 01:23:49 env -- 
common/autotest_common.sh@10 -- # set +x 00:03:41.222 01:23:49 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:41.222 01:23:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.222 01:23:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.222 01:23:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.222 ************************************ 00:03:41.222 START TEST rpc 00:03:41.222 ************************************ 00:03:41.222 01:23:49 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:41.222 * Looking for test storage... 00:03:41.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:41.222 01:23:49 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:41.222 01:23:49 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:41.222 01:23:49 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:41.222 01:23:49 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:41.223 01:23:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.223 01:23:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.223 01:23:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.223 01:23:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.223 01:23:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.223 01:23:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.223 01:23:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.223 01:23:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.223 01:23:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.223 01:23:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.223 01:23:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.223 01:23:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:41.223 01:23:49 rpc -- scripts/common.sh@345 -- # : 1 00:03:41.223 01:23:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.223 01:23:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:41.223 01:23:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:41.223 01:23:49 rpc -- scripts/common.sh@353 -- # local d=1 00:03:41.223 01:23:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.223 01:23:49 rpc -- scripts/common.sh@355 -- # echo 1 00:03:41.223 01:23:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.223 01:23:49 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:41.223 01:23:49 rpc -- scripts/common.sh@353 -- # local d=2 00:03:41.223 01:23:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.223 01:23:49 rpc -- scripts/common.sh@355 -- # echo 2 00:03:41.223 01:23:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.223 01:23:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.223 01:23:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.223 01:23:49 rpc -- scripts/common.sh@368 -- # return 0 00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:41.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.223 --rc genhtml_branch_coverage=1 00:03:41.223 --rc genhtml_function_coverage=1 00:03:41.223 --rc genhtml_legend=1 00:03:41.223 --rc geninfo_all_blocks=1 00:03:41.223 --rc geninfo_unexecuted_blocks=1 00:03:41.223 00:03:41.223 ' 00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:41.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.223 --rc genhtml_branch_coverage=1 00:03:41.223 --rc genhtml_function_coverage=1 00:03:41.223 --rc genhtml_legend=1 00:03:41.223 --rc geninfo_all_blocks=1 00:03:41.223 --rc geninfo_unexecuted_blocks=1 00:03:41.223 00:03:41.223 ' 00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:41.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.223 --rc genhtml_branch_coverage=1 00:03:41.223 --rc genhtml_function_coverage=1 00:03:41.223 --rc genhtml_legend=1 00:03:41.223 --rc geninfo_all_blocks=1 00:03:41.223 --rc geninfo_unexecuted_blocks=1 00:03:41.223 00:03:41.223 ' 00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:41.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.223 --rc genhtml_branch_coverage=1 00:03:41.223 --rc genhtml_function_coverage=1 00:03:41.223 --rc genhtml_legend=1 00:03:41.223 --rc geninfo_all_blocks=1 00:03:41.223 --rc geninfo_unexecuted_blocks=1 00:03:41.223 00:03:41.223 ' 00:03:41.223 01:23:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57183 00:03:41.223 01:23:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.223 01:23:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57183 00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@835 -- # '[' -z 57183 ']' 00:03:41.223 01:23:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:41.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
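waitforlisten polls until the freshly started spdk_tgt accepts connections on /var/tmp/spdk.sock; from then on, every rpc_cmd in this suite is plain JSON-RPC 2.0 over that Unix socket. A minimal client sketch in POSIX C (rpc_get_methods is a standard SPDK method; the single read is a simplification):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int
query_methods(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	char resp[4096];
	ssize_t n;
	int fd;

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		return -1;
	}
	const char *req = "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"rpc_get_methods\"}";
	write(fd, req, strlen(req));
	n = read(fd, resp, sizeof(resp) - 1);	/* sketch: assumes one read suffices */
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp);
	}
	close(fd);
	return 0;
}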
00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:41.223 01:23:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.484 [2024-11-17 01:23:49.763357] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:03:41.484 [2024-11-17 01:23:49.763506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57183 ] 00:03:41.484 [2024-11-17 01:23:49.927369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.745 [2024-11-17 01:23:50.060461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:41.745 [2024-11-17 01:23:50.060549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57183' to capture a snapshot of events at runtime. 00:03:41.745 [2024-11-17 01:23:50.060560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:41.745 [2024-11-17 01:23:50.060571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:41.745 [2024-11-17 01:23:50.060579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57183 for offline analysis/debug. 00:03:41.745 [2024-11-17 01:23:50.061512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.316 01:23:50 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:42.316 01:23:50 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:42.316 01:23:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:42.316 01:23:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:42.316 01:23:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:42.316 01:23:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:42.316 01:23:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.316 01:23:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.316 01:23:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.578 ************************************ 00:03:42.578 START TEST rpc_integrity 00:03:42.578 ************************************ 00:03:42.578 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:42.578 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:42.578 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.578 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.578 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.578 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:42.578 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:42.578 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:42.578 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:42.578 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.578 01:23:50 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.578 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.578 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:42.578 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:42.578 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.578 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.578 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.578 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:42.578 { 00:03:42.578 "name": "Malloc0", 00:03:42.578 "aliases": [ 00:03:42.578 "331949ee-b380-47a2-a9d1-9784abd4c735" 00:03:42.578 ], 00:03:42.578 "product_name": "Malloc disk", 00:03:42.578 "block_size": 512, 00:03:42.578 "num_blocks": 16384, 00:03:42.578 "uuid": "331949ee-b380-47a2-a9d1-9784abd4c735", 00:03:42.578 "assigned_rate_limits": { 00:03:42.578 "rw_ios_per_sec": 0, 00:03:42.578 "rw_mbytes_per_sec": 0, 00:03:42.578 "r_mbytes_per_sec": 0, 00:03:42.578 "w_mbytes_per_sec": 0 00:03:42.578 }, 00:03:42.578 "claimed": false, 00:03:42.578 "zoned": false, 00:03:42.578 "supported_io_types": { 00:03:42.578 "read": true, 00:03:42.578 "write": true, 00:03:42.578 "unmap": true, 00:03:42.578 "flush": true, 00:03:42.578 "reset": true, 00:03:42.578 "nvme_admin": false, 00:03:42.578 "nvme_io": false, 00:03:42.578 "nvme_io_md": false, 00:03:42.578 "write_zeroes": true, 00:03:42.578 "zcopy": true, 00:03:42.578 "get_zone_info": false, 00:03:42.578 "zone_management": false, 00:03:42.578 "zone_append": false, 00:03:42.578 "compare": false, 00:03:42.578 "compare_and_write": false, 00:03:42.578 "abort": true, 00:03:42.578 "seek_hole": false, 00:03:42.578 "seek_data": false, 00:03:42.578 "copy": true, 00:03:42.578 "nvme_iov_md": false 00:03:42.578 }, 00:03:42.578 "memory_domains": [ 00:03:42.578 { 00:03:42.578 "dma_device_id": "system", 00:03:42.578 "dma_device_type": 1 00:03:42.578 }, 00:03:42.578 { 00:03:42.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.578 "dma_device_type": 2 00:03:42.578 } 00:03:42.578 ], 00:03:42.578 "driver_specific": {} 00:03:42.578 } 00:03:42.578 ]' 00:03:42.578 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:42.578 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:42.579 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.579 [2024-11-17 01:23:50.899525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:42.579 [2024-11-17 01:23:50.899609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:42.579 [2024-11-17 01:23:50.899658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:03:42.579 [2024-11-17 01:23:50.899673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:42.579 [2024-11-17 01:23:50.902248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:42.579 [2024-11-17 01:23:50.902308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:42.579 Passthru0 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.579 
01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.579 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:42.579 { 00:03:42.579 "name": "Malloc0", 00:03:42.579 "aliases": [ 00:03:42.579 "331949ee-b380-47a2-a9d1-9784abd4c735" 00:03:42.579 ], 00:03:42.579 "product_name": "Malloc disk", 00:03:42.579 "block_size": 512, 00:03:42.579 "num_blocks": 16384, 00:03:42.579 "uuid": "331949ee-b380-47a2-a9d1-9784abd4c735", 00:03:42.579 "assigned_rate_limits": { 00:03:42.579 "rw_ios_per_sec": 0, 00:03:42.579 "rw_mbytes_per_sec": 0, 00:03:42.579 "r_mbytes_per_sec": 0, 00:03:42.579 "w_mbytes_per_sec": 0 00:03:42.579 }, 00:03:42.579 "claimed": true, 00:03:42.579 "claim_type": "exclusive_write", 00:03:42.579 "zoned": false, 00:03:42.579 "supported_io_types": { 00:03:42.579 "read": true, 00:03:42.579 "write": true, 00:03:42.579 "unmap": true, 00:03:42.579 "flush": true, 00:03:42.579 "reset": true, 00:03:42.579 "nvme_admin": false, 00:03:42.579 "nvme_io": false, 00:03:42.579 "nvme_io_md": false, 00:03:42.579 "write_zeroes": true, 00:03:42.579 "zcopy": true, 00:03:42.579 "get_zone_info": false, 00:03:42.579 "zone_management": false, 00:03:42.579 "zone_append": false, 00:03:42.579 "compare": false, 00:03:42.579 "compare_and_write": false, 00:03:42.579 "abort": true, 00:03:42.579 "seek_hole": false, 00:03:42.579 "seek_data": false, 00:03:42.579 "copy": true, 00:03:42.579 "nvme_iov_md": false 00:03:42.579 }, 00:03:42.579 "memory_domains": [ 00:03:42.579 { 00:03:42.579 "dma_device_id": "system", 00:03:42.579 "dma_device_type": 1 00:03:42.579 }, 00:03:42.579 { 00:03:42.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.579 "dma_device_type": 2 00:03:42.579 } 00:03:42.579 ], 00:03:42.579 "driver_specific": {} 00:03:42.579 }, 00:03:42.579 { 00:03:42.579 "name": "Passthru0", 00:03:42.579 "aliases": [ 00:03:42.579 "b01da335-54b4-5761-abb1-6cd4098963f8" 00:03:42.579 ], 00:03:42.579 "product_name": "passthru", 00:03:42.579 "block_size": 512, 00:03:42.579 "num_blocks": 16384, 00:03:42.579 "uuid": "b01da335-54b4-5761-abb1-6cd4098963f8", 00:03:42.579 "assigned_rate_limits": { 00:03:42.579 "rw_ios_per_sec": 0, 00:03:42.579 "rw_mbytes_per_sec": 0, 00:03:42.579 "r_mbytes_per_sec": 0, 00:03:42.579 "w_mbytes_per_sec": 0 00:03:42.579 }, 00:03:42.579 "claimed": false, 00:03:42.579 "zoned": false, 00:03:42.579 "supported_io_types": { 00:03:42.579 "read": true, 00:03:42.579 "write": true, 00:03:42.579 "unmap": true, 00:03:42.579 "flush": true, 00:03:42.579 "reset": true, 00:03:42.579 "nvme_admin": false, 00:03:42.579 "nvme_io": false, 00:03:42.579 "nvme_io_md": false, 00:03:42.579 "write_zeroes": true, 00:03:42.579 "zcopy": true, 00:03:42.579 "get_zone_info": false, 00:03:42.579 "zone_management": false, 00:03:42.579 "zone_append": false, 00:03:42.579 "compare": false, 00:03:42.579 "compare_and_write": false, 00:03:42.579 "abort": true, 00:03:42.579 "seek_hole": false, 00:03:42.579 "seek_data": false, 00:03:42.579 "copy": true, 00:03:42.579 "nvme_iov_md": false 00:03:42.579 }, 00:03:42.579 "memory_domains": [ 00:03:42.579 { 00:03:42.579 "dma_device_id": "system", 00:03:42.579 "dma_device_type": 1 00:03:42.579 }, 00:03:42.579 { 00:03:42.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.579 "dma_device_type": 2 
00:03:42.579 } 00:03:42.579 ], 00:03:42.579 "driver_specific": { 00:03:42.579 "passthru": { 00:03:42.579 "name": "Passthru0", 00:03:42.579 "base_bdev_name": "Malloc0" 00:03:42.579 } 00:03:42.579 } 00:03:42.579 } 00:03:42.579 ]' 00:03:42.579 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:42.579 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:42.579 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.579 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.579 01:23:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.579 01:23:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.579 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:42.579 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:42.579 01:23:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:42.579 00:03:42.579 real 0m0.252s 00:03:42.579 user 0m0.126s 00:03:42.579 sys 0m0.035s 00:03:42.840 ************************************ 00:03:42.840 END TEST rpc_integrity 00:03:42.840 ************************************ 00:03:42.840 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.840 01:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.840 01:23:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:42.840 01:23:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.840 01:23:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.840 01:23:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.840 ************************************ 00:03:42.840 START TEST rpc_plugins 00:03:42.840 ************************************ 00:03:42.840 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:42.840 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:42.840 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.840 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.840 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.840 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:42.840 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:42.840 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.840 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.840 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.840 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:42.840 { 00:03:42.840 "name": "Malloc1", 00:03:42.840 "aliases": 
[ 00:03:42.840 "716d5326-e52a-4275-8731-580f598c608a" 00:03:42.840 ], 00:03:42.840 "product_name": "Malloc disk", 00:03:42.840 "block_size": 4096, 00:03:42.840 "num_blocks": 256, 00:03:42.840 "uuid": "716d5326-e52a-4275-8731-580f598c608a", 00:03:42.840 "assigned_rate_limits": { 00:03:42.840 "rw_ios_per_sec": 0, 00:03:42.840 "rw_mbytes_per_sec": 0, 00:03:42.840 "r_mbytes_per_sec": 0, 00:03:42.840 "w_mbytes_per_sec": 0 00:03:42.840 }, 00:03:42.840 "claimed": false, 00:03:42.840 "zoned": false, 00:03:42.840 "supported_io_types": { 00:03:42.840 "read": true, 00:03:42.840 "write": true, 00:03:42.840 "unmap": true, 00:03:42.840 "flush": true, 00:03:42.840 "reset": true, 00:03:42.840 "nvme_admin": false, 00:03:42.840 "nvme_io": false, 00:03:42.840 "nvme_io_md": false, 00:03:42.840 "write_zeroes": true, 00:03:42.840 "zcopy": true, 00:03:42.840 "get_zone_info": false, 00:03:42.840 "zone_management": false, 00:03:42.840 "zone_append": false, 00:03:42.840 "compare": false, 00:03:42.840 "compare_and_write": false, 00:03:42.840 "abort": true, 00:03:42.840 "seek_hole": false, 00:03:42.840 "seek_data": false, 00:03:42.840 "copy": true, 00:03:42.840 "nvme_iov_md": false 00:03:42.840 }, 00:03:42.840 "memory_domains": [ 00:03:42.840 { 00:03:42.840 "dma_device_id": "system", 00:03:42.840 "dma_device_type": 1 00:03:42.840 }, 00:03:42.840 { 00:03:42.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.841 "dma_device_type": 2 00:03:42.841 } 00:03:42.841 ], 00:03:42.841 "driver_specific": {} 00:03:42.841 } 00:03:42.841 ]' 00:03:42.841 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:42.841 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:42.841 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:42.841 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.841 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.841 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.841 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:42.841 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.841 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.841 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.841 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:42.841 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:42.841 01:23:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:42.841 00:03:42.841 real 0m0.120s 00:03:42.841 user 0m0.060s 00:03:42.841 sys 0m0.020s 00:03:42.841 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.841 ************************************ 00:03:42.841 END TEST rpc_plugins 00:03:42.841 ************************************ 00:03:42.841 01:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.841 01:23:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:42.841 01:23:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.841 01:23:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.841 01:23:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.841 ************************************ 00:03:42.841 START TEST rpc_trace_cmd_test 00:03:42.841 ************************************ 00:03:42.841 01:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:03:42.841 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:42.841 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:42.841 01:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.841 01:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:43.106 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57183", 00:03:43.106 "tpoint_group_mask": "0x8", 00:03:43.106 "iscsi_conn": { 00:03:43.106 "mask": "0x2", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "scsi": { 00:03:43.106 "mask": "0x4", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "bdev": { 00:03:43.106 "mask": "0x8", 00:03:43.106 "tpoint_mask": "0xffffffffffffffff" 00:03:43.106 }, 00:03:43.106 "nvmf_rdma": { 00:03:43.106 "mask": "0x10", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "nvmf_tcp": { 00:03:43.106 "mask": "0x20", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "ftl": { 00:03:43.106 "mask": "0x40", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "blobfs": { 00:03:43.106 "mask": "0x80", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "dsa": { 00:03:43.106 "mask": "0x200", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "thread": { 00:03:43.106 "mask": "0x400", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "nvme_pcie": { 00:03:43.106 "mask": "0x800", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "iaa": { 00:03:43.106 "mask": "0x1000", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "nvme_tcp": { 00:03:43.106 "mask": "0x2000", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "bdev_nvme": { 00:03:43.106 "mask": "0x4000", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "sock": { 00:03:43.106 "mask": "0x8000", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "blob": { 00:03:43.106 "mask": "0x10000", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "bdev_raid": { 00:03:43.106 "mask": "0x20000", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 }, 00:03:43.106 "scheduler": { 00:03:43.106 "mask": "0x40000", 00:03:43.106 "tpoint_mask": "0x0" 00:03:43.106 } 00:03:43.106 }' 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:43.106 00:03:43.106 real 0m0.169s 00:03:43.106 user 0m0.145s 00:03:43.106 sys 0m0.014s 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:03:43.106 01:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:43.106 ************************************ 00:03:43.106 END TEST rpc_trace_cmd_test 00:03:43.106 ************************************ 00:03:43.106 01:23:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:43.106 01:23:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:43.106 01:23:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:43.106 01:23:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.106 01:23:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.106 01:23:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.106 ************************************ 00:03:43.106 START TEST rpc_daemon_integrity 00:03:43.106 ************************************ 00:03:43.106 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:43.106 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:43.106 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.106 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.106 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.106 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:43.106 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:43.368 { 00:03:43.368 "name": "Malloc2", 00:03:43.368 "aliases": [ 00:03:43.368 "9cc15030-33df-4735-8d45-7a44c9d4df16" 00:03:43.368 ], 00:03:43.368 "product_name": "Malloc disk", 00:03:43.368 "block_size": 512, 00:03:43.368 "num_blocks": 16384, 00:03:43.368 "uuid": "9cc15030-33df-4735-8d45-7a44c9d4df16", 00:03:43.368 "assigned_rate_limits": { 00:03:43.368 "rw_ios_per_sec": 0, 00:03:43.368 "rw_mbytes_per_sec": 0, 00:03:43.368 "r_mbytes_per_sec": 0, 00:03:43.368 "w_mbytes_per_sec": 0 00:03:43.368 }, 00:03:43.368 "claimed": false, 00:03:43.368 "zoned": false, 00:03:43.368 "supported_io_types": { 00:03:43.368 "read": true, 00:03:43.368 "write": true, 00:03:43.368 "unmap": true, 00:03:43.368 "flush": true, 00:03:43.368 "reset": true, 00:03:43.368 "nvme_admin": false, 00:03:43.368 "nvme_io": false, 00:03:43.368 "nvme_io_md": false, 00:03:43.368 "write_zeroes": true, 00:03:43.368 "zcopy": true, 00:03:43.368 "get_zone_info": false, 00:03:43.368 "zone_management": false, 00:03:43.368 "zone_append": false, 00:03:43.368 "compare": false, 00:03:43.368 
"compare_and_write": false, 00:03:43.368 "abort": true, 00:03:43.368 "seek_hole": false, 00:03:43.368 "seek_data": false, 00:03:43.368 "copy": true, 00:03:43.368 "nvme_iov_md": false 00:03:43.368 }, 00:03:43.368 "memory_domains": [ 00:03:43.368 { 00:03:43.368 "dma_device_id": "system", 00:03:43.368 "dma_device_type": 1 00:03:43.368 }, 00:03:43.368 { 00:03:43.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.368 "dma_device_type": 2 00:03:43.368 } 00:03:43.368 ], 00:03:43.368 "driver_specific": {} 00:03:43.368 } 00:03:43.368 ]' 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.368 [2024-11-17 01:23:51.634687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:43.368 [2024-11-17 01:23:51.634763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:43.368 [2024-11-17 01:23:51.634787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:03:43.368 [2024-11-17 01:23:51.634812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:43.368 [2024-11-17 01:23:51.637251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:43.368 [2024-11-17 01:23:51.637304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:43.368 Passthru0 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.368 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:43.368 { 00:03:43.368 "name": "Malloc2", 00:03:43.368 "aliases": [ 00:03:43.368 "9cc15030-33df-4735-8d45-7a44c9d4df16" 00:03:43.369 ], 00:03:43.369 "product_name": "Malloc disk", 00:03:43.369 "block_size": 512, 00:03:43.369 "num_blocks": 16384, 00:03:43.369 "uuid": "9cc15030-33df-4735-8d45-7a44c9d4df16", 00:03:43.369 "assigned_rate_limits": { 00:03:43.369 "rw_ios_per_sec": 0, 00:03:43.369 "rw_mbytes_per_sec": 0, 00:03:43.369 "r_mbytes_per_sec": 0, 00:03:43.369 "w_mbytes_per_sec": 0 00:03:43.369 }, 00:03:43.369 "claimed": true, 00:03:43.369 "claim_type": "exclusive_write", 00:03:43.369 "zoned": false, 00:03:43.369 "supported_io_types": { 00:03:43.369 "read": true, 00:03:43.369 "write": true, 00:03:43.369 "unmap": true, 00:03:43.369 "flush": true, 00:03:43.369 "reset": true, 00:03:43.369 "nvme_admin": false, 00:03:43.369 "nvme_io": false, 00:03:43.369 "nvme_io_md": false, 00:03:43.369 "write_zeroes": true, 00:03:43.369 "zcopy": true, 00:03:43.369 "get_zone_info": false, 00:03:43.369 "zone_management": false, 00:03:43.369 "zone_append": false, 00:03:43.369 "compare": false, 00:03:43.369 "compare_and_write": false, 00:03:43.369 "abort": true, 00:03:43.369 "seek_hole": false, 00:03:43.369 "seek_data": false, 
00:03:43.369 "copy": true, 00:03:43.369 "nvme_iov_md": false 00:03:43.369 }, 00:03:43.369 "memory_domains": [ 00:03:43.369 { 00:03:43.369 "dma_device_id": "system", 00:03:43.369 "dma_device_type": 1 00:03:43.369 }, 00:03:43.369 { 00:03:43.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.369 "dma_device_type": 2 00:03:43.369 } 00:03:43.369 ], 00:03:43.369 "driver_specific": {} 00:03:43.369 }, 00:03:43.369 { 00:03:43.369 "name": "Passthru0", 00:03:43.369 "aliases": [ 00:03:43.369 "3304c552-52ab-5091-a376-32ef68f1316b" 00:03:43.369 ], 00:03:43.369 "product_name": "passthru", 00:03:43.369 "block_size": 512, 00:03:43.369 "num_blocks": 16384, 00:03:43.369 "uuid": "3304c552-52ab-5091-a376-32ef68f1316b", 00:03:43.369 "assigned_rate_limits": { 00:03:43.369 "rw_ios_per_sec": 0, 00:03:43.369 "rw_mbytes_per_sec": 0, 00:03:43.369 "r_mbytes_per_sec": 0, 00:03:43.369 "w_mbytes_per_sec": 0 00:03:43.369 }, 00:03:43.369 "claimed": false, 00:03:43.369 "zoned": false, 00:03:43.369 "supported_io_types": { 00:03:43.369 "read": true, 00:03:43.369 "write": true, 00:03:43.369 "unmap": true, 00:03:43.369 "flush": true, 00:03:43.369 "reset": true, 00:03:43.369 "nvme_admin": false, 00:03:43.369 "nvme_io": false, 00:03:43.369 "nvme_io_md": false, 00:03:43.369 "write_zeroes": true, 00:03:43.369 "zcopy": true, 00:03:43.369 "get_zone_info": false, 00:03:43.369 "zone_management": false, 00:03:43.369 "zone_append": false, 00:03:43.369 "compare": false, 00:03:43.369 "compare_and_write": false, 00:03:43.369 "abort": true, 00:03:43.369 "seek_hole": false, 00:03:43.369 "seek_data": false, 00:03:43.369 "copy": true, 00:03:43.369 "nvme_iov_md": false 00:03:43.369 }, 00:03:43.369 "memory_domains": [ 00:03:43.369 { 00:03:43.369 "dma_device_id": "system", 00:03:43.369 "dma_device_type": 1 00:03:43.369 }, 00:03:43.369 { 00:03:43.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.369 "dma_device_type": 2 00:03:43.369 } 00:03:43.369 ], 00:03:43.369 "driver_specific": { 00:03:43.369 "passthru": { 00:03:43.369 "name": "Passthru0", 00:03:43.369 "base_bdev_name": "Malloc2" 00:03:43.369 } 00:03:43.369 } 00:03:43.369 } 00:03:43.369 ]' 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:43.369 00:03:43.369 real 0m0.243s 00:03:43.369 user 0m0.124s 00:03:43.369 sys 0m0.036s 00:03:43.369 ************************************ 00:03:43.369 END TEST rpc_daemon_integrity 00:03:43.369 ************************************ 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.369 01:23:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.369 01:23:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:43.369 01:23:51 rpc -- rpc/rpc.sh@84 -- # killprocess 57183 00:03:43.369 01:23:51 rpc -- common/autotest_common.sh@954 -- # '[' -z 57183 ']' 00:03:43.369 01:23:51 rpc -- common/autotest_common.sh@958 -- # kill -0 57183 00:03:43.369 01:23:51 rpc -- common/autotest_common.sh@959 -- # uname 00:03:43.369 01:23:51 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:43.369 01:23:51 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57183 00:03:43.630 01:23:51 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:43.630 killing process with pid 57183 00:03:43.630 01:23:51 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:43.630 01:23:51 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57183' 00:03:43.630 01:23:51 rpc -- common/autotest_common.sh@973 -- # kill 57183 00:03:43.630 01:23:51 rpc -- common/autotest_common.sh@978 -- # wait 57183 00:03:44.573 00:03:44.573 real 0m3.488s 00:03:44.573 user 0m3.821s 00:03:44.573 sys 0m0.714s 00:03:44.573 01:23:53 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.573 01:23:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.573 ************************************ 00:03:44.573 END TEST rpc 00:03:44.573 ************************************ 00:03:44.834 01:23:53 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:44.834 01:23:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.834 01:23:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.834 01:23:53 -- common/autotest_common.sh@10 -- # set +x 00:03:44.834 ************************************ 00:03:44.834 START TEST skip_rpc 00:03:44.834 ************************************ 00:03:44.834 01:23:53 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:44.834 * Looking for test storage... 
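Before the log moves into skip_rpc: every method the rpc suite just exercised (bdev_malloc_create, bdev_passthru_create, trace_get_info, ...) is wired into the target through SPDK's RPC registration framework. A hedged sketch of a trivial method, with a hypothetical name that is not part of SPDK:

#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"
#include "spdk/json.h"

/* Hypothetical "example_ping" method, for illustration only. */
static void
rpc_example_ping(struct spdk_jsonrpc_request *request,
		 const struct spdk_json_val *params)
{
	struct spdk_json_write_ctx *w;

	if (params != NULL) {
		spdk_jsonrpc_send_error_response(request,
						 SPDK_JSONRPC_ERROR_INVALID_PARAMS,
						 "example_ping takes no parameters");
		return;
	}
	w = spdk_jsonrpc_begin_result(request);
	spdk_json_write_bool(w, true);
	spdk_jsonrpc_end_result(request, w);
}
SPDK_RPC_REGISTER("example_ping", rpc_example_ping, SPDK_RPC_RUNTIME)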
00:03:44.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:44.834 01:23:53 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:44.834 01:23:53 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:44.834 01:23:53 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:44.834 01:23:53 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.834 01:23:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.835 01:23:53 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:44.835 01:23:53 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.835 01:23:53 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.835 --rc genhtml_branch_coverage=1 00:03:44.835 --rc genhtml_function_coverage=1 00:03:44.835 --rc genhtml_legend=1 00:03:44.835 --rc geninfo_all_blocks=1 00:03:44.835 --rc geninfo_unexecuted_blocks=1 00:03:44.835 00:03:44.835 ' 00:03:44.835 01:23:53 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.835 --rc genhtml_branch_coverage=1 00:03:44.835 --rc genhtml_function_coverage=1 00:03:44.835 --rc genhtml_legend=1 00:03:44.835 --rc geninfo_all_blocks=1 00:03:44.835 --rc geninfo_unexecuted_blocks=1 00:03:44.835 00:03:44.835 ' 00:03:44.835 01:23:53 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:03:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.835 --rc genhtml_branch_coverage=1 00:03:44.835 --rc genhtml_function_coverage=1 00:03:44.835 --rc genhtml_legend=1 00:03:44.835 --rc geninfo_all_blocks=1 00:03:44.835 --rc geninfo_unexecuted_blocks=1 00:03:44.835 00:03:44.835 ' 00:03:44.835 01:23:53 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.835 --rc genhtml_branch_coverage=1 00:03:44.835 --rc genhtml_function_coverage=1 00:03:44.835 --rc genhtml_legend=1 00:03:44.835 --rc geninfo_all_blocks=1 00:03:44.835 --rc geninfo_unexecuted_blocks=1 00:03:44.835 00:03:44.835 ' 00:03:44.835 01:23:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:44.835 01:23:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:44.835 01:23:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:44.835 01:23:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.835 01:23:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.835 01:23:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.835 ************************************ 00:03:44.835 START TEST skip_rpc 00:03:44.835 ************************************ 00:03:44.835 01:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:44.835 01:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57396 00:03:44.835 01:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:44.835 01:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:44.835 01:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:45.096 [2024-11-17 01:23:53.302425] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:03:45.096 [2024-11-17 01:23:53.303197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57396 ] 00:03:45.096 [2024-11-17 01:23:53.467943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.356 [2024-11-17 01:23:53.600157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57396 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57396 ']' 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57396 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57396 00:03:50.637 killing process with pid 57396 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57396' 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57396 00:03:50.637 01:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57396 00:03:51.206 ************************************ 00:03:51.206 END TEST skip_rpc 00:03:51.206 ************************************ 00:03:51.206 00:03:51.206 real 0m6.201s 00:03:51.206 user 0m5.736s 00:03:51.206 sys 0m0.355s 00:03:51.206 01:23:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.206 01:23:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:03:51.206 01:23:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:51.206 01:23:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.206 01:23:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.206 01:23:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.206 ************************************ 00:03:51.206 START TEST skip_rpc_with_json 00:03:51.206 ************************************ 00:03:51.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57489 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57489 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57489 ']' 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:51.206 01:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:51.206 [2024-11-17 01:23:59.560326] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:03:51.206 [2024-11-17 01:23:59.561239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57489 ] 00:03:51.468 [2024-11-17 01:23:59.728897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.468 [2024-11-17 01:23:59.819619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.038 [2024-11-17 01:24:00.393767] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:52.038 request: 00:03:52.038 { 00:03:52.038 "trtype": "tcp", 00:03:52.038 "method": "nvmf_get_transports", 00:03:52.038 "req_id": 1 00:03:52.038 } 00:03:52.038 Got JSON-RPC error response 00:03:52.038 response: 00:03:52.038 { 00:03:52.038 "code": -19, 00:03:52.038 "message": "No such device" 00:03:52.038 } 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.038 [2024-11-17 01:24:00.405867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.038 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.299 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.299 01:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:52.299 { 00:03:52.299 "subsystems": [ 00:03:52.299 { 00:03:52.299 "subsystem": "fsdev", 00:03:52.299 "config": [ 00:03:52.299 { 00:03:52.299 "method": "fsdev_set_opts", 00:03:52.299 "params": { 00:03:52.299 "fsdev_io_pool_size": 65535, 00:03:52.299 "fsdev_io_cache_size": 256 00:03:52.299 } 00:03:52.299 } 00:03:52.299 ] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "keyring", 00:03:52.299 "config": [] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "iobuf", 00:03:52.299 "config": [ 00:03:52.299 { 00:03:52.299 "method": "iobuf_set_options", 00:03:52.299 "params": { 00:03:52.299 "small_pool_count": 8192, 00:03:52.299 "large_pool_count": 1024, 00:03:52.299 "small_bufsize": 8192, 00:03:52.299 "large_bufsize": 135168, 00:03:52.299 "enable_numa": false 00:03:52.299 } 00:03:52.299 } 00:03:52.299 ] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "sock", 00:03:52.299 "config": [ 00:03:52.299 { 
00:03:52.299 "method": "sock_set_default_impl", 00:03:52.299 "params": { 00:03:52.299 "impl_name": "posix" 00:03:52.299 } 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "method": "sock_impl_set_options", 00:03:52.299 "params": { 00:03:52.299 "impl_name": "ssl", 00:03:52.299 "recv_buf_size": 4096, 00:03:52.299 "send_buf_size": 4096, 00:03:52.299 "enable_recv_pipe": true, 00:03:52.299 "enable_quickack": false, 00:03:52.299 "enable_placement_id": 0, 00:03:52.299 "enable_zerocopy_send_server": true, 00:03:52.299 "enable_zerocopy_send_client": false, 00:03:52.299 "zerocopy_threshold": 0, 00:03:52.299 "tls_version": 0, 00:03:52.299 "enable_ktls": false 00:03:52.299 } 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "method": "sock_impl_set_options", 00:03:52.299 "params": { 00:03:52.299 "impl_name": "posix", 00:03:52.299 "recv_buf_size": 2097152, 00:03:52.299 "send_buf_size": 2097152, 00:03:52.299 "enable_recv_pipe": true, 00:03:52.299 "enable_quickack": false, 00:03:52.299 "enable_placement_id": 0, 00:03:52.299 "enable_zerocopy_send_server": true, 00:03:52.299 "enable_zerocopy_send_client": false, 00:03:52.299 "zerocopy_threshold": 0, 00:03:52.299 "tls_version": 0, 00:03:52.299 "enable_ktls": false 00:03:52.299 } 00:03:52.299 } 00:03:52.299 ] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "vmd", 00:03:52.299 "config": [] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "accel", 00:03:52.299 "config": [ 00:03:52.299 { 00:03:52.299 "method": "accel_set_options", 00:03:52.299 "params": { 00:03:52.299 "small_cache_size": 128, 00:03:52.299 "large_cache_size": 16, 00:03:52.299 "task_count": 2048, 00:03:52.299 "sequence_count": 2048, 00:03:52.299 "buf_count": 2048 00:03:52.299 } 00:03:52.299 } 00:03:52.299 ] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "bdev", 00:03:52.299 "config": [ 00:03:52.299 { 00:03:52.299 "method": "bdev_set_options", 00:03:52.299 "params": { 00:03:52.299 "bdev_io_pool_size": 65535, 00:03:52.299 "bdev_io_cache_size": 256, 00:03:52.299 "bdev_auto_examine": true, 00:03:52.299 "iobuf_small_cache_size": 128, 00:03:52.299 "iobuf_large_cache_size": 16 00:03:52.299 } 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "method": "bdev_raid_set_options", 00:03:52.299 "params": { 00:03:52.299 "process_window_size_kb": 1024, 00:03:52.299 "process_max_bandwidth_mb_sec": 0 00:03:52.299 } 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "method": "bdev_iscsi_set_options", 00:03:52.299 "params": { 00:03:52.299 "timeout_sec": 30 00:03:52.299 } 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "method": "bdev_nvme_set_options", 00:03:52.299 "params": { 00:03:52.299 "action_on_timeout": "none", 00:03:52.299 "timeout_us": 0, 00:03:52.299 "timeout_admin_us": 0, 00:03:52.299 "keep_alive_timeout_ms": 10000, 00:03:52.299 "arbitration_burst": 0, 00:03:52.299 "low_priority_weight": 0, 00:03:52.299 "medium_priority_weight": 0, 00:03:52.299 "high_priority_weight": 0, 00:03:52.299 "nvme_adminq_poll_period_us": 10000, 00:03:52.299 "nvme_ioq_poll_period_us": 0, 00:03:52.299 "io_queue_requests": 0, 00:03:52.299 "delay_cmd_submit": true, 00:03:52.299 "transport_retry_count": 4, 00:03:52.299 "bdev_retry_count": 3, 00:03:52.299 "transport_ack_timeout": 0, 00:03:52.299 "ctrlr_loss_timeout_sec": 0, 00:03:52.299 "reconnect_delay_sec": 0, 00:03:52.299 "fast_io_fail_timeout_sec": 0, 00:03:52.299 "disable_auto_failback": false, 00:03:52.299 "generate_uuids": false, 00:03:52.299 "transport_tos": 0, 00:03:52.299 "nvme_error_stat": false, 00:03:52.299 "rdma_srq_size": 0, 00:03:52.299 "io_path_stat": false, 
00:03:52.299 "allow_accel_sequence": false, 00:03:52.299 "rdma_max_cq_size": 0, 00:03:52.299 "rdma_cm_event_timeout_ms": 0, 00:03:52.299 "dhchap_digests": [ 00:03:52.299 "sha256", 00:03:52.299 "sha384", 00:03:52.299 "sha512" 00:03:52.299 ], 00:03:52.299 "dhchap_dhgroups": [ 00:03:52.299 "null", 00:03:52.299 "ffdhe2048", 00:03:52.299 "ffdhe3072", 00:03:52.299 "ffdhe4096", 00:03:52.299 "ffdhe6144", 00:03:52.299 "ffdhe8192" 00:03:52.299 ] 00:03:52.299 } 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "method": "bdev_nvme_set_hotplug", 00:03:52.299 "params": { 00:03:52.299 "period_us": 100000, 00:03:52.299 "enable": false 00:03:52.299 } 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "method": "bdev_wait_for_examine" 00:03:52.299 } 00:03:52.299 ] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "scsi", 00:03:52.299 "config": null 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "scheduler", 00:03:52.299 "config": [ 00:03:52.299 { 00:03:52.299 "method": "framework_set_scheduler", 00:03:52.299 "params": { 00:03:52.299 "name": "static" 00:03:52.299 } 00:03:52.299 } 00:03:52.299 ] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "vhost_scsi", 00:03:52.299 "config": [] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "vhost_blk", 00:03:52.299 "config": [] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "ublk", 00:03:52.299 "config": [] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "nbd", 00:03:52.299 "config": [] 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "subsystem": "nvmf", 00:03:52.299 "config": [ 00:03:52.299 { 00:03:52.299 "method": "nvmf_set_config", 00:03:52.299 "params": { 00:03:52.299 "discovery_filter": "match_any", 00:03:52.299 "admin_cmd_passthru": { 00:03:52.299 "identify_ctrlr": false 00:03:52.299 }, 00:03:52.299 "dhchap_digests": [ 00:03:52.299 "sha256", 00:03:52.299 "sha384", 00:03:52.299 "sha512" 00:03:52.299 ], 00:03:52.299 "dhchap_dhgroups": [ 00:03:52.299 "null", 00:03:52.299 "ffdhe2048", 00:03:52.299 "ffdhe3072", 00:03:52.299 "ffdhe4096", 00:03:52.299 "ffdhe6144", 00:03:52.299 "ffdhe8192" 00:03:52.299 ] 00:03:52.299 } 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "method": "nvmf_set_max_subsystems", 00:03:52.299 "params": { 00:03:52.299 "max_subsystems": 1024 00:03:52.299 } 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "method": "nvmf_set_crdt", 00:03:52.299 "params": { 00:03:52.299 "crdt1": 0, 00:03:52.299 "crdt2": 0, 00:03:52.299 "crdt3": 0 00:03:52.299 } 00:03:52.299 }, 00:03:52.299 { 00:03:52.299 "method": "nvmf_create_transport", 00:03:52.299 "params": { 00:03:52.299 "trtype": "TCP", 00:03:52.299 "max_queue_depth": 128, 00:03:52.299 "max_io_qpairs_per_ctrlr": 127, 00:03:52.299 "in_capsule_data_size": 4096, 00:03:52.299 "max_io_size": 131072, 00:03:52.299 "io_unit_size": 131072, 00:03:52.299 "max_aq_depth": 128, 00:03:52.299 "num_shared_buffers": 511, 00:03:52.299 "buf_cache_size": 4294967295, 00:03:52.299 "dif_insert_or_strip": false, 00:03:52.299 "zcopy": false, 00:03:52.299 "c2h_success": true, 00:03:52.299 "sock_priority": 0, 00:03:52.299 "abort_timeout_sec": 1, 00:03:52.299 "ack_timeout": 0, 00:03:52.299 "data_wr_pool_size": 0 00:03:52.300 } 00:03:52.300 } 00:03:52.300 ] 00:03:52.300 }, 00:03:52.300 { 00:03:52.300 "subsystem": "iscsi", 00:03:52.300 "config": [ 00:03:52.300 { 00:03:52.300 "method": "iscsi_set_options", 00:03:52.300 "params": { 00:03:52.300 "node_base": "iqn.2016-06.io.spdk", 00:03:52.300 "max_sessions": 128, 00:03:52.300 "max_connections_per_session": 2, 00:03:52.300 "max_queue_depth": 64, 00:03:52.300 
"default_time2wait": 2, 00:03:52.300 "default_time2retain": 20, 00:03:52.300 "first_burst_length": 8192, 00:03:52.300 "immediate_data": true, 00:03:52.300 "allow_duplicated_isid": false, 00:03:52.300 "error_recovery_level": 0, 00:03:52.300 "nop_timeout": 60, 00:03:52.300 "nop_in_interval": 30, 00:03:52.300 "disable_chap": false, 00:03:52.300 "require_chap": false, 00:03:52.300 "mutual_chap": false, 00:03:52.300 "chap_group": 0, 00:03:52.300 "max_large_datain_per_connection": 64, 00:03:52.300 "max_r2t_per_connection": 4, 00:03:52.300 "pdu_pool_size": 36864, 00:03:52.300 "immediate_data_pool_size": 16384, 00:03:52.300 "data_out_pool_size": 2048 00:03:52.300 } 00:03:52.300 } 00:03:52.300 ] 00:03:52.300 } 00:03:52.300 ] 00:03:52.300 } 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57489 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57489 ']' 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57489 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57489 00:03:52.300 killing process with pid 57489 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57489' 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57489 00:03:52.300 01:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57489 00:03:53.683 01:24:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57528 00:03:53.683 01:24:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:53.683 01:24:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57528 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57528 ']' 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57528 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57528 00:03:58.972 killing process with pid 57528 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57528' 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57528 00:03:58.972 01:24:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57528 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:59.539 ************************************ 00:03:59.539 END TEST skip_rpc_with_json 00:03:59.539 ************************************ 00:03:59.539 00:03:59.539 real 0m8.475s 00:03:59.539 user 0m8.141s 00:03:59.539 sys 0m0.567s 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.539 01:24:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:59.539 01:24:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.539 01:24:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.539 01:24:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.539 ************************************ 00:03:59.539 START TEST skip_rpc_with_delay 00:03:59.539 ************************************ 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:59.539 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:03:59.540 01:24:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.798 [2024-11-17 01:24:08.053642] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:59.798 01:24:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:59.798 01:24:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:59.798 01:24:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:59.798 01:24:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:59.798 00:03:59.798 real 0m0.111s 00:03:59.798 user 0m0.056s 00:03:59.798 sys 0m0.053s 00:03:59.798 01:24:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.798 ************************************ 00:03:59.798 END TEST skip_rpc_with_delay 00:03:59.798 ************************************ 00:03:59.798 01:24:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:59.798 01:24:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:59.798 01:24:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:59.798 01:24:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:59.798 01:24:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.798 01:24:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.798 01:24:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.798 ************************************ 00:03:59.798 START TEST exit_on_failed_rpc_init 00:03:59.798 ************************************ 00:03:59.798 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:59.798 01:24:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57645 00:03:59.798 01:24:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57645 00:03:59.798 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57645 ']' 00:03:59.798 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.798 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.798 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.798 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.798 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:59.798 01:24:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:59.798 [2024-11-17 01:24:08.216995] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:03:59.798 [2024-11-17 01:24:08.217104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57645 ] 00:04:00.057 [2024-11-17 01:24:08.372915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.057 [2024-11-17 01:24:08.447195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.628 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.628 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:00.628 01:24:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.628 01:24:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:00.628 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:00.628 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:00.628 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:00.628 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.628 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:00.628 01:24:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.628 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:00.628 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.628 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:00.628 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:00.628 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:00.890 [2024-11-17 01:24:09.078326] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:00.890 [2024-11-17 01:24:09.078439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57663 ] 00:04:00.890 [2024-11-17 01:24:09.238686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.890 [2024-11-17 01:24:09.337957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:00.890 [2024-11-17 01:24:09.338034] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:00.890 [2024-11-17 01:24:09.338046] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:00.890 [2024-11-17 01:24:09.338059] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57645 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57645 ']' 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57645 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57645 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.153 killing process with pid 57645 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57645' 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57645 00:04:01.153 01:24:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57645 00:04:03.114 00:04:03.114 real 0m3.007s 00:04:03.114 user 0m3.288s 00:04:03.114 sys 0m0.382s 00:04:03.114 01:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.114 01:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:03.114 ************************************ 00:04:03.114 END TEST exit_on_failed_rpc_init 00:04:03.114 ************************************ 00:04:03.114 01:24:11 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:03.114 00:04:03.114 real 0m18.121s 00:04:03.114 user 0m17.364s 00:04:03.114 sys 0m1.525s 00:04:03.114 01:24:11 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.114 01:24:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.114 ************************************ 00:04:03.114 END TEST skip_rpc 00:04:03.114 ************************************ 00:04:03.114 01:24:11 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:03.114 01:24:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.114 01:24:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.114 01:24:11 -- common/autotest_common.sh@10 -- # set +x 00:04:03.114 
************************************ 00:04:03.114 START TEST rpc_client 00:04:03.114 ************************************ 00:04:03.114 01:24:11 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:03.114 * Looking for test storage... 00:04:03.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:03.114 01:24:11 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.114 01:24:11 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.114 01:24:11 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.114 01:24:11 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:03.114 01:24:11 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:03.115 01:24:11 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.115 01:24:11 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:03.115 01:24:11 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.115 01:24:11 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.115 01:24:11 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.115 01:24:11 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:03.115 01:24:11 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.115 01:24:11 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:03.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.115 --rc genhtml_branch_coverage=1 00:04:03.115 --rc genhtml_function_coverage=1 00:04:03.115 --rc genhtml_legend=1 00:04:03.115 --rc geninfo_all_blocks=1 00:04:03.115 --rc geninfo_unexecuted_blocks=1 00:04:03.115 00:04:03.115 ' 00:04:03.115 01:24:11 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:03.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.115 --rc genhtml_branch_coverage=1 00:04:03.115 --rc genhtml_function_coverage=1 00:04:03.115 --rc genhtml_legend=1 00:04:03.115 --rc geninfo_all_blocks=1 00:04:03.115 --rc geninfo_unexecuted_blocks=1 00:04:03.115 00:04:03.115 ' 00:04:03.115 01:24:11 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:03.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.115 --rc genhtml_branch_coverage=1 00:04:03.115 --rc genhtml_function_coverage=1 00:04:03.115 --rc genhtml_legend=1 00:04:03.115 --rc geninfo_all_blocks=1 00:04:03.115 --rc geninfo_unexecuted_blocks=1 00:04:03.115 00:04:03.115 ' 00:04:03.115 01:24:11 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:03.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.115 --rc genhtml_branch_coverage=1 00:04:03.115 --rc genhtml_function_coverage=1 00:04:03.115 --rc genhtml_legend=1 00:04:03.115 --rc geninfo_all_blocks=1 00:04:03.115 --rc geninfo_unexecuted_blocks=1 00:04:03.115 00:04:03.115 ' 00:04:03.115 01:24:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:03.115 OK 00:04:03.115 01:24:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:03.115 00:04:03.115 real 0m0.182s 00:04:03.115 user 0m0.097s 00:04:03.115 sys 0m0.092s 00:04:03.115 01:24:11 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.115 01:24:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:03.115 ************************************ 00:04:03.115 END TEST rpc_client 00:04:03.115 ************************************ 00:04:03.115 01:24:11 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:03.115 01:24:11 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.115 01:24:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.115 01:24:11 -- common/autotest_common.sh@10 -- # set +x 00:04:03.115 ************************************ 00:04:03.115 START TEST json_config 00:04:03.115 ************************************ 00:04:03.115 01:24:11 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:03.115 01:24:11 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.115 01:24:11 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.115 01:24:11 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.115 01:24:11 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:03.115 01:24:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.115 01:24:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.115 01:24:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.115 01:24:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.115 01:24:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.115 01:24:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.115 01:24:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.115 01:24:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.115 01:24:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.115 01:24:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.115 01:24:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.115 01:24:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:03.115 01:24:11 json_config -- scripts/common.sh@345 -- # : 1 00:04:03.115 01:24:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.115 01:24:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.115 01:24:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:03.115 01:24:11 json_config -- scripts/common.sh@353 -- # local d=1 00:04:03.115 01:24:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.115 01:24:11 json_config -- scripts/common.sh@355 -- # echo 1 00:04:03.115 01:24:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.115 01:24:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:03.115 01:24:11 json_config -- scripts/common.sh@353 -- # local d=2 00:04:03.115 01:24:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.115 01:24:11 json_config -- scripts/common.sh@355 -- # echo 2 00:04:03.115 01:24:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.378 01:24:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.378 01:24:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.378 01:24:11 json_config -- scripts/common.sh@368 -- # return 0 00:04:03.378 01:24:11 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.378 01:24:11 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:03.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.378 --rc genhtml_branch_coverage=1 00:04:03.378 --rc genhtml_function_coverage=1 00:04:03.378 --rc genhtml_legend=1 00:04:03.378 --rc geninfo_all_blocks=1 00:04:03.378 --rc geninfo_unexecuted_blocks=1 00:04:03.378 00:04:03.378 ' 00:04:03.378 01:24:11 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:03.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.378 --rc genhtml_branch_coverage=1 00:04:03.378 --rc genhtml_function_coverage=1 00:04:03.378 --rc genhtml_legend=1 00:04:03.378 --rc geninfo_all_blocks=1 00:04:03.378 --rc geninfo_unexecuted_blocks=1 00:04:03.378 00:04:03.378 ' 00:04:03.378 01:24:11 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:03.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.378 --rc genhtml_branch_coverage=1 00:04:03.378 --rc genhtml_function_coverage=1 00:04:03.378 --rc genhtml_legend=1 00:04:03.378 --rc geninfo_all_blocks=1 00:04:03.378 --rc geninfo_unexecuted_blocks=1 00:04:03.378 00:04:03.378 ' 00:04:03.378 01:24:11 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:03.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.378 --rc genhtml_branch_coverage=1 00:04:03.378 --rc genhtml_function_coverage=1 00:04:03.378 --rc genhtml_legend=1 00:04:03.378 --rc geninfo_all_blocks=1 00:04:03.378 --rc geninfo_unexecuted_blocks=1 00:04:03.378 00:04:03.378 ' 00:04:03.378 01:24:11 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:03.378 01:24:11 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6bcd302-db51-499b-b3da-54e4b86a5713 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b6bcd302-db51-499b-b3da-54e4b86a5713 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:03.378 01:24:11 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:03.378 01:24:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:03.378 01:24:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:03.378 01:24:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:03.378 01:24:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:03.378 01:24:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.378 01:24:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.378 01:24:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.379 01:24:11 json_config -- paths/export.sh@5 -- # export PATH 00:04:03.379 01:24:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.379 01:24:11 json_config -- nvmf/common.sh@51 -- # : 0 00:04:03.379 01:24:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:03.379 01:24:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:03.379 01:24:11 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:03.379 01:24:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:03.379 01:24:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:03.379 01:24:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:03.379 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:03.379 01:24:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:03.379 01:24:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:03.379 01:24:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:03.379 01:24:11 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:03.379 01:24:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:03.379 01:24:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:03.379 01:24:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:03.379 01:24:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:03.379 WARNING: No tests are enabled so not running JSON configuration tests 00:04:03.379 01:24:11 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:03.379 01:24:11 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:03.379 00:04:03.379 real 0m0.138s 00:04:03.379 user 0m0.081s 00:04:03.379 sys 0m0.061s 00:04:03.379 01:24:11 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.379 01:24:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.379 ************************************ 00:04:03.379 END TEST json_config 00:04:03.379 ************************************ 00:04:03.379 01:24:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:03.379 01:24:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.379 01:24:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.379 01:24:11 -- common/autotest_common.sh@10 -- # set +x 00:04:03.379 ************************************ 00:04:03.379 START TEST json_config_extra_key 00:04:03.379 ************************************ 00:04:03.379 01:24:11 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:03.379 01:24:11 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.379 01:24:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.379 01:24:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.379 01:24:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.379 01:24:11 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:03.379 01:24:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.379 01:24:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:03.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.379 --rc genhtml_branch_coverage=1 00:04:03.379 --rc genhtml_function_coverage=1 00:04:03.379 --rc genhtml_legend=1 00:04:03.379 --rc geninfo_all_blocks=1 00:04:03.379 --rc geninfo_unexecuted_blocks=1 00:04:03.379 00:04:03.379 ' 00:04:03.379 01:24:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:03.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.379 --rc genhtml_branch_coverage=1 00:04:03.379 --rc genhtml_function_coverage=1 00:04:03.379 --rc genhtml_legend=1 00:04:03.379 --rc geninfo_all_blocks=1 00:04:03.379 --rc geninfo_unexecuted_blocks=1 00:04:03.379 00:04:03.379 ' 00:04:03.379 01:24:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:03.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.379 --rc genhtml_branch_coverage=1 00:04:03.379 --rc genhtml_function_coverage=1 00:04:03.379 --rc genhtml_legend=1 00:04:03.379 --rc geninfo_all_blocks=1 00:04:03.379 --rc geninfo_unexecuted_blocks=1 00:04:03.379 00:04:03.379 ' 00:04:03.379 01:24:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:03.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.379 --rc genhtml_branch_coverage=1 00:04:03.379 --rc 
genhtml_function_coverage=1 00:04:03.379 --rc genhtml_legend=1 00:04:03.379 --rc geninfo_all_blocks=1 00:04:03.379 --rc geninfo_unexecuted_blocks=1 00:04:03.379 00:04:03.379 ' 00:04:03.379 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6bcd302-db51-499b-b3da-54e4b86a5713 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b6bcd302-db51-499b-b3da-54e4b86a5713 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:03.379 01:24:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:03.379 01:24:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:03.379 01:24:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.380 01:24:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.380 01:24:11 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.380 01:24:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:03.380 01:24:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.380 01:24:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:03.380 01:24:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:03.380 01:24:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:03.380 01:24:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:03.380 01:24:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:03.380 01:24:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:03.380 01:24:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:03.380 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:03.380 01:24:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:03.380 01:24:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:03.380 01:24:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:03.380 INFO: launching applications... 00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
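A note on the warning that keeps appearing above: every time test/nvmf/common.sh is sourced, its line 33 prints "[: : integer expression expected". The xtrace shows the cause: the guard expands to '[' '' -eq 1 ']', i.e. an empty (unset) gate variable reaches test(1)'s numeric -eq operator, which refuses a non-integer operand and exits with status 2. The script is not running under set -e at that point, so the condition is simply treated as false and the run continues, which is why the tests still pass despite the noise. A minimal bash reproduction; the variable name is a placeholder, since the log does not reveal which flag is empty:

    flag=""                  # stands in for whichever gate variable is unset here
    [ "$flag" -eq 1 ]        # -> [: : integer expression expected (exit status 2)
    [ "${flag:-0}" -eq 1 ]   # defensive form: default empty/unset to 0, no warning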
00:04:03.380 01:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57857 00:04:03.380 Waiting for target to run... 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57857 /var/tmp/spdk_tgt.sock 00:04:03.380 01:24:11 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57857 ']' 00:04:03.380 01:24:11 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:03.380 01:24:11 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.380 01:24:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:03.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:03.380 01:24:11 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:03.380 01:24:11 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.380 01:24:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:03.641 [2024-11-17 01:24:11.860828] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:03.641 [2024-11-17 01:24:11.860955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57857 ] 00:04:03.901 [2024-11-17 01:24:12.171287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.901 [2024-11-17 01:24:12.287298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.472 01:24:12 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:04.472 01:24:12 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:04.472 00:04:04.472 01:24:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:04.472 INFO: shutting down applications... 00:04:04.472 01:24:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
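With the target up (pid 57857, reactor started on core 0), the harness next tears it down through json_config/common.sh's generic stop-and-wait path, traced in the lines that follow: send SIGINT, then poll kill -0 every 0.5 s for at most 30 iterations until the PID disappears. A standalone sketch of that pattern, with the pid hardcoded purely for illustration:

    pid=57857
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only probes; it fails once the PID is gone
        sleep 0.5
    done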
00:04:04.472 01:24:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:04.472 01:24:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:04.472 01:24:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:04.472 01:24:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57857 ]] 00:04:04.472 01:24:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57857 00:04:04.472 01:24:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:04.472 01:24:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:04.472 01:24:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57857 00:04:04.472 01:24:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:05.043 01:24:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:05.043 01:24:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.043 01:24:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57857 00:04:05.043 01:24:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:05.616 01:24:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:05.616 01:24:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.616 01:24:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57857 00:04:05.616 01:24:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:05.876 01:24:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:05.876 01:24:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.876 01:24:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57857 00:04:05.876 01:24:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:06.448 01:24:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:06.448 01:24:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.448 01:24:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57857 00:04:06.448 01:24:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:06.448 01:24:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:06.448 01:24:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:06.448 SPDK target shutdown done 00:04:06.448 01:24:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:06.448 Success 00:04:06.448 01:24:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:06.448 00:04:06.448 real 0m3.188s 00:04:06.448 user 0m2.847s 00:04:06.448 sys 0m0.414s 00:04:06.448 01:24:14 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.448 01:24:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:06.448 ************************************ 00:04:06.448 END TEST json_config_extra_key 00:04:06.448 ************************************ 00:04:06.448 01:24:14 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.448 01:24:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.448 01:24:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.448 01:24:14 -- common/autotest_common.sh@10 -- # set +x 00:04:06.448 
************************************ 00:04:06.449 START TEST alias_rpc 00:04:06.449 ************************************ 00:04:06.449 01:24:14 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.711 * Looking for test storage... 00:04:06.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.711 01:24:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.711 --rc genhtml_branch_coverage=1 00:04:06.711 --rc genhtml_function_coverage=1 00:04:06.711 --rc genhtml_legend=1 00:04:06.711 --rc geninfo_all_blocks=1 00:04:06.711 --rc geninfo_unexecuted_blocks=1 00:04:06.711 00:04:06.711 ' 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.711 --rc genhtml_branch_coverage=1 00:04:06.711 --rc genhtml_function_coverage=1 00:04:06.711 --rc genhtml_legend=1 00:04:06.711 --rc geninfo_all_blocks=1 00:04:06.711 --rc geninfo_unexecuted_blocks=1 00:04:06.711 00:04:06.711 ' 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.711 --rc genhtml_branch_coverage=1 00:04:06.711 --rc genhtml_function_coverage=1 00:04:06.711 --rc genhtml_legend=1 00:04:06.711 --rc geninfo_all_blocks=1 00:04:06.711 --rc geninfo_unexecuted_blocks=1 00:04:06.711 00:04:06.711 ' 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.711 --rc genhtml_branch_coverage=1 00:04:06.711 --rc genhtml_function_coverage=1 00:04:06.711 --rc genhtml_legend=1 00:04:06.711 --rc geninfo_all_blocks=1 00:04:06.711 --rc geninfo_unexecuted_blocks=1 00:04:06.711 00:04:06.711 ' 00:04:06.711 01:24:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:06.711 01:24:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57950 00:04:06.711 01:24:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57950 00:04:06.711 01:24:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57950 ']' 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:06.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.711 01:24:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.711 [2024-11-17 01:24:15.070245] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:06.711 [2024-11-17 01:24:15.070365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57950 ] 00:04:06.972 [2024-11-17 01:24:15.227460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.972 [2024-11-17 01:24:15.320933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.544 01:24:15 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.544 01:24:15 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:07.544 01:24:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:07.806 01:24:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57950 00:04:07.806 01:24:16 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57950 ']' 00:04:07.806 01:24:16 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57950 00:04:07.806 01:24:16 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:07.806 01:24:16 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.806 01:24:16 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57950 00:04:07.806 01:24:16 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.806 killing process with pid 57950 00:04:07.806 01:24:16 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.806 01:24:16 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57950' 00:04:07.806 01:24:16 alias_rpc -- common/autotest_common.sh@973 -- # kill 57950 00:04:07.806 01:24:16 alias_rpc -- common/autotest_common.sh@978 -- # wait 57950 00:04:09.193 00:04:09.193 real 0m2.748s 00:04:09.193 user 0m2.872s 00:04:09.193 sys 0m0.368s 00:04:09.193 01:24:17 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.193 ************************************ 00:04:09.193 END TEST alias_rpc 00:04:09.193 ************************************ 00:04:09.193 01:24:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.193 01:24:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:09.193 01:24:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:09.193 01:24:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.193 01:24:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.193 01:24:17 -- common/autotest_common.sh@10 -- # set +x 00:04:09.193 ************************************ 00:04:09.193 START TEST spdkcli_tcp 00:04:09.193 ************************************ 00:04:09.193 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:09.455 * Looking for test storage... 
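The alias_rpc teardown above also spells out the harness's killprocess pattern: probe the PID with kill -0, read the command name back with ps so a recycled PID belonging to some other process is never killed (an SPDK target shows up as reactor_0), then kill and wait. A condensed sketch of that logic, simplified from the xtrace rather than copied from the real helper (which, as the '[' reactor_0 = sudo ']' check shows, also special-cases targets launched via sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                 # nothing to do if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")    # spdk_tgt reports itself as reactor_0
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                 # wait only works because $pid is our child
    }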
00:04:09.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.455 01:24:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:09.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.455 --rc genhtml_branch_coverage=1 00:04:09.455 --rc genhtml_function_coverage=1 00:04:09.455 --rc genhtml_legend=1 00:04:09.455 --rc geninfo_all_blocks=1 00:04:09.455 --rc geninfo_unexecuted_blocks=1 00:04:09.455 00:04:09.455 ' 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:09.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.455 --rc genhtml_branch_coverage=1 00:04:09.455 --rc genhtml_function_coverage=1 00:04:09.455 --rc genhtml_legend=1 00:04:09.455 --rc geninfo_all_blocks=1 00:04:09.455 --rc geninfo_unexecuted_blocks=1 00:04:09.455 
00:04:09.455 ' 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:09.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.455 --rc genhtml_branch_coverage=1 00:04:09.455 --rc genhtml_function_coverage=1 00:04:09.455 --rc genhtml_legend=1 00:04:09.455 --rc geninfo_all_blocks=1 00:04:09.455 --rc geninfo_unexecuted_blocks=1 00:04:09.455 00:04:09.455 ' 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:09.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.455 --rc genhtml_branch_coverage=1 00:04:09.455 --rc genhtml_function_coverage=1 00:04:09.455 --rc genhtml_legend=1 00:04:09.455 --rc geninfo_all_blocks=1 00:04:09.455 --rc geninfo_unexecuted_blocks=1 00:04:09.455 00:04:09.455 ' 00:04:09.455 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:09.455 01:24:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:09.455 01:24:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:09.455 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:09.455 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:09.455 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:09.455 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.455 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58046 00:04:09.455 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58046 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58046 ']' 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.455 01:24:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.455 01:24:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:09.455 [2024-11-17 01:24:17.859825] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
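spdk_tgt listens only on a Unix domain socket (/var/tmp/spdk.sock here), so to exercise rpc.py's TCP transport the test bridges TCP port 9998 on 127.0.0.1 to that socket with socat, as the lines below show, and then asks the target for its full RPC method list. The same bridge can be reproduced by hand against a running target (paths relative to the spdk repo root; the log uses the absolute /home/vagrant/spdk_repo/spdk prefix):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &            # TCP-to-Unix bridge
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # -r connect retries, -t timeout
    kill "$socat_pid"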
00:04:09.455 [2024-11-17 01:24:17.859940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58046 ] 00:04:09.718 [2024-11-17 01:24:18.019051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.718 [2024-11-17 01:24:18.118003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.718 [2024-11-17 01:24:18.118081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.289 01:24:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.289 01:24:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:10.289 01:24:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:10.289 01:24:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58063 00:04:10.289 01:24:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:10.548 [ 00:04:10.548 "bdev_malloc_delete", 00:04:10.548 "bdev_malloc_create", 00:04:10.548 "bdev_null_resize", 00:04:10.548 "bdev_null_delete", 00:04:10.548 "bdev_null_create", 00:04:10.548 "bdev_nvme_cuse_unregister", 00:04:10.548 "bdev_nvme_cuse_register", 00:04:10.548 "bdev_opal_new_user", 00:04:10.548 "bdev_opal_set_lock_state", 00:04:10.548 "bdev_opal_delete", 00:04:10.548 "bdev_opal_get_info", 00:04:10.548 "bdev_opal_create", 00:04:10.548 "bdev_nvme_opal_revert", 00:04:10.548 "bdev_nvme_opal_init", 00:04:10.548 "bdev_nvme_send_cmd", 00:04:10.548 "bdev_nvme_set_keys", 00:04:10.548 "bdev_nvme_get_path_iostat", 00:04:10.548 "bdev_nvme_get_mdns_discovery_info", 00:04:10.548 "bdev_nvme_stop_mdns_discovery", 00:04:10.548 "bdev_nvme_start_mdns_discovery", 00:04:10.548 "bdev_nvme_set_multipath_policy", 00:04:10.548 "bdev_nvme_set_preferred_path", 00:04:10.548 "bdev_nvme_get_io_paths", 00:04:10.548 "bdev_nvme_remove_error_injection", 00:04:10.548 "bdev_nvme_add_error_injection", 00:04:10.548 "bdev_nvme_get_discovery_info", 00:04:10.548 "bdev_nvme_stop_discovery", 00:04:10.548 "bdev_nvme_start_discovery", 00:04:10.548 "bdev_nvme_get_controller_health_info", 00:04:10.548 "bdev_nvme_disable_controller", 00:04:10.548 "bdev_nvme_enable_controller", 00:04:10.548 "bdev_nvme_reset_controller", 00:04:10.548 "bdev_nvme_get_transport_statistics", 00:04:10.548 "bdev_nvme_apply_firmware", 00:04:10.548 "bdev_nvme_detach_controller", 00:04:10.548 "bdev_nvme_get_controllers", 00:04:10.548 "bdev_nvme_attach_controller", 00:04:10.548 "bdev_nvme_set_hotplug", 00:04:10.548 "bdev_nvme_set_options", 00:04:10.548 "bdev_passthru_delete", 00:04:10.548 "bdev_passthru_create", 00:04:10.548 "bdev_lvol_set_parent_bdev", 00:04:10.548 "bdev_lvol_set_parent", 00:04:10.548 "bdev_lvol_check_shallow_copy", 00:04:10.548 "bdev_lvol_start_shallow_copy", 00:04:10.548 "bdev_lvol_grow_lvstore", 00:04:10.548 "bdev_lvol_get_lvols", 00:04:10.548 "bdev_lvol_get_lvstores", 00:04:10.548 "bdev_lvol_delete", 00:04:10.548 "bdev_lvol_set_read_only", 00:04:10.548 "bdev_lvol_resize", 00:04:10.548 "bdev_lvol_decouple_parent", 00:04:10.548 "bdev_lvol_inflate", 00:04:10.548 "bdev_lvol_rename", 00:04:10.548 "bdev_lvol_clone_bdev", 00:04:10.548 "bdev_lvol_clone", 00:04:10.548 "bdev_lvol_snapshot", 00:04:10.548 "bdev_lvol_create", 00:04:10.548 "bdev_lvol_delete_lvstore", 00:04:10.548 "bdev_lvol_rename_lvstore", 00:04:10.548 
"bdev_lvol_create_lvstore", 00:04:10.548 "bdev_raid_set_options", 00:04:10.548 "bdev_raid_remove_base_bdev", 00:04:10.548 "bdev_raid_add_base_bdev", 00:04:10.548 "bdev_raid_delete", 00:04:10.548 "bdev_raid_create", 00:04:10.548 "bdev_raid_get_bdevs", 00:04:10.548 "bdev_error_inject_error", 00:04:10.549 "bdev_error_delete", 00:04:10.549 "bdev_error_create", 00:04:10.549 "bdev_split_delete", 00:04:10.549 "bdev_split_create", 00:04:10.549 "bdev_delay_delete", 00:04:10.549 "bdev_delay_create", 00:04:10.549 "bdev_delay_update_latency", 00:04:10.549 "bdev_zone_block_delete", 00:04:10.549 "bdev_zone_block_create", 00:04:10.549 "blobfs_create", 00:04:10.549 "blobfs_detect", 00:04:10.549 "blobfs_set_cache_size", 00:04:10.549 "bdev_xnvme_delete", 00:04:10.549 "bdev_xnvme_create", 00:04:10.549 "bdev_aio_delete", 00:04:10.549 "bdev_aio_rescan", 00:04:10.549 "bdev_aio_create", 00:04:10.549 "bdev_ftl_set_property", 00:04:10.549 "bdev_ftl_get_properties", 00:04:10.549 "bdev_ftl_get_stats", 00:04:10.549 "bdev_ftl_unmap", 00:04:10.549 "bdev_ftl_unload", 00:04:10.549 "bdev_ftl_delete", 00:04:10.549 "bdev_ftl_load", 00:04:10.549 "bdev_ftl_create", 00:04:10.549 "bdev_virtio_attach_controller", 00:04:10.549 "bdev_virtio_scsi_get_devices", 00:04:10.549 "bdev_virtio_detach_controller", 00:04:10.549 "bdev_virtio_blk_set_hotplug", 00:04:10.549 "bdev_iscsi_delete", 00:04:10.549 "bdev_iscsi_create", 00:04:10.549 "bdev_iscsi_set_options", 00:04:10.549 "accel_error_inject_error", 00:04:10.549 "ioat_scan_accel_module", 00:04:10.549 "dsa_scan_accel_module", 00:04:10.549 "iaa_scan_accel_module", 00:04:10.549 "keyring_file_remove_key", 00:04:10.549 "keyring_file_add_key", 00:04:10.549 "keyring_linux_set_options", 00:04:10.549 "fsdev_aio_delete", 00:04:10.549 "fsdev_aio_create", 00:04:10.549 "iscsi_get_histogram", 00:04:10.549 "iscsi_enable_histogram", 00:04:10.549 "iscsi_set_options", 00:04:10.549 "iscsi_get_auth_groups", 00:04:10.549 "iscsi_auth_group_remove_secret", 00:04:10.549 "iscsi_auth_group_add_secret", 00:04:10.549 "iscsi_delete_auth_group", 00:04:10.549 "iscsi_create_auth_group", 00:04:10.549 "iscsi_set_discovery_auth", 00:04:10.549 "iscsi_get_options", 00:04:10.549 "iscsi_target_node_request_logout", 00:04:10.549 "iscsi_target_node_set_redirect", 00:04:10.549 "iscsi_target_node_set_auth", 00:04:10.549 "iscsi_target_node_add_lun", 00:04:10.549 "iscsi_get_stats", 00:04:10.549 "iscsi_get_connections", 00:04:10.549 "iscsi_portal_group_set_auth", 00:04:10.549 "iscsi_start_portal_group", 00:04:10.549 "iscsi_delete_portal_group", 00:04:10.549 "iscsi_create_portal_group", 00:04:10.549 "iscsi_get_portal_groups", 00:04:10.549 "iscsi_delete_target_node", 00:04:10.549 "iscsi_target_node_remove_pg_ig_maps", 00:04:10.549 "iscsi_target_node_add_pg_ig_maps", 00:04:10.549 "iscsi_create_target_node", 00:04:10.549 "iscsi_get_target_nodes", 00:04:10.549 "iscsi_delete_initiator_group", 00:04:10.549 "iscsi_initiator_group_remove_initiators", 00:04:10.549 "iscsi_initiator_group_add_initiators", 00:04:10.549 "iscsi_create_initiator_group", 00:04:10.549 "iscsi_get_initiator_groups", 00:04:10.549 "nvmf_set_crdt", 00:04:10.549 "nvmf_set_config", 00:04:10.549 "nvmf_set_max_subsystems", 00:04:10.549 "nvmf_stop_mdns_prr", 00:04:10.549 "nvmf_publish_mdns_prr", 00:04:10.549 "nvmf_subsystem_get_listeners", 00:04:10.549 "nvmf_subsystem_get_qpairs", 00:04:10.549 "nvmf_subsystem_get_controllers", 00:04:10.549 "nvmf_get_stats", 00:04:10.549 "nvmf_get_transports", 00:04:10.549 "nvmf_create_transport", 00:04:10.549 "nvmf_get_targets", 00:04:10.549 
"nvmf_delete_target", 00:04:10.549 "nvmf_create_target", 00:04:10.549 "nvmf_subsystem_allow_any_host", 00:04:10.549 "nvmf_subsystem_set_keys", 00:04:10.549 "nvmf_subsystem_remove_host", 00:04:10.549 "nvmf_subsystem_add_host", 00:04:10.549 "nvmf_ns_remove_host", 00:04:10.549 "nvmf_ns_add_host", 00:04:10.549 "nvmf_subsystem_remove_ns", 00:04:10.549 "nvmf_subsystem_set_ns_ana_group", 00:04:10.549 "nvmf_subsystem_add_ns", 00:04:10.549 "nvmf_subsystem_listener_set_ana_state", 00:04:10.549 "nvmf_discovery_get_referrals", 00:04:10.549 "nvmf_discovery_remove_referral", 00:04:10.549 "nvmf_discovery_add_referral", 00:04:10.549 "nvmf_subsystem_remove_listener", 00:04:10.549 "nvmf_subsystem_add_listener", 00:04:10.549 "nvmf_delete_subsystem", 00:04:10.549 "nvmf_create_subsystem", 00:04:10.549 "nvmf_get_subsystems", 00:04:10.549 "env_dpdk_get_mem_stats", 00:04:10.549 "nbd_get_disks", 00:04:10.549 "nbd_stop_disk", 00:04:10.549 "nbd_start_disk", 00:04:10.549 "ublk_recover_disk", 00:04:10.549 "ublk_get_disks", 00:04:10.549 "ublk_stop_disk", 00:04:10.549 "ublk_start_disk", 00:04:10.549 "ublk_destroy_target", 00:04:10.549 "ublk_create_target", 00:04:10.549 "virtio_blk_create_transport", 00:04:10.549 "virtio_blk_get_transports", 00:04:10.549 "vhost_controller_set_coalescing", 00:04:10.549 "vhost_get_controllers", 00:04:10.549 "vhost_delete_controller", 00:04:10.549 "vhost_create_blk_controller", 00:04:10.549 "vhost_scsi_controller_remove_target", 00:04:10.549 "vhost_scsi_controller_add_target", 00:04:10.549 "vhost_start_scsi_controller", 00:04:10.549 "vhost_create_scsi_controller", 00:04:10.549 "thread_set_cpumask", 00:04:10.549 "scheduler_set_options", 00:04:10.549 "framework_get_governor", 00:04:10.549 "framework_get_scheduler", 00:04:10.549 "framework_set_scheduler", 00:04:10.549 "framework_get_reactors", 00:04:10.549 "thread_get_io_channels", 00:04:10.549 "thread_get_pollers", 00:04:10.549 "thread_get_stats", 00:04:10.549 "framework_monitor_context_switch", 00:04:10.549 "spdk_kill_instance", 00:04:10.549 "log_enable_timestamps", 00:04:10.549 "log_get_flags", 00:04:10.549 "log_clear_flag", 00:04:10.549 "log_set_flag", 00:04:10.549 "log_get_level", 00:04:10.549 "log_set_level", 00:04:10.549 "log_get_print_level", 00:04:10.549 "log_set_print_level", 00:04:10.549 "framework_enable_cpumask_locks", 00:04:10.549 "framework_disable_cpumask_locks", 00:04:10.549 "framework_wait_init", 00:04:10.549 "framework_start_init", 00:04:10.549 "scsi_get_devices", 00:04:10.549 "bdev_get_histogram", 00:04:10.549 "bdev_enable_histogram", 00:04:10.549 "bdev_set_qos_limit", 00:04:10.549 "bdev_set_qd_sampling_period", 00:04:10.549 "bdev_get_bdevs", 00:04:10.549 "bdev_reset_iostat", 00:04:10.549 "bdev_get_iostat", 00:04:10.549 "bdev_examine", 00:04:10.549 "bdev_wait_for_examine", 00:04:10.549 "bdev_set_options", 00:04:10.549 "accel_get_stats", 00:04:10.549 "accel_set_options", 00:04:10.549 "accel_set_driver", 00:04:10.549 "accel_crypto_key_destroy", 00:04:10.549 "accel_crypto_keys_get", 00:04:10.549 "accel_crypto_key_create", 00:04:10.549 "accel_assign_opc", 00:04:10.549 "accel_get_module_info", 00:04:10.549 "accel_get_opc_assignments", 00:04:10.549 "vmd_rescan", 00:04:10.549 "vmd_remove_device", 00:04:10.549 "vmd_enable", 00:04:10.549 "sock_get_default_impl", 00:04:10.549 "sock_set_default_impl", 00:04:10.549 "sock_impl_set_options", 00:04:10.549 "sock_impl_get_options", 00:04:10.549 "iobuf_get_stats", 00:04:10.549 "iobuf_set_options", 00:04:10.549 "keyring_get_keys", 00:04:10.549 "framework_get_pci_devices", 00:04:10.549 
"framework_get_config", 00:04:10.549 "framework_get_subsystems", 00:04:10.549 "fsdev_set_opts", 00:04:10.549 "fsdev_get_opts", 00:04:10.549 "trace_get_info", 00:04:10.549 "trace_get_tpoint_group_mask", 00:04:10.549 "trace_disable_tpoint_group", 00:04:10.549 "trace_enable_tpoint_group", 00:04:10.549 "trace_clear_tpoint_mask", 00:04:10.549 "trace_set_tpoint_mask", 00:04:10.549 "notify_get_notifications", 00:04:10.549 "notify_get_types", 00:04:10.549 "spdk_get_version", 00:04:10.549 "rpc_get_methods" 00:04:10.549 ] 00:04:10.549 01:24:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:10.549 01:24:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:10.549 01:24:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58046 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58046 ']' 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58046 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58046 00:04:10.549 killing process with pid 58046 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58046' 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58046 00:04:10.549 01:24:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58046 00:04:11.967 00:04:11.967 real 0m2.754s 00:04:11.967 user 0m4.945s 00:04:11.967 sys 0m0.453s 00:04:11.967 ************************************ 00:04:11.967 END TEST spdkcli_tcp 00:04:11.967 ************************************ 00:04:11.967 01:24:20 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.967 01:24:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:12.250 01:24:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:12.250 01:24:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.250 01:24:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.250 01:24:20 -- common/autotest_common.sh@10 -- # set +x 00:04:12.250 ************************************ 00:04:12.250 START TEST dpdk_mem_utility 00:04:12.250 ************************************ 00:04:12.250 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:12.250 * Looking for test storage... 
00:04:12.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:12.250 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.250 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.250 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:12.250 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:12.250 01:24:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.250 01:24:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.250 01:24:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.250 01:24:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.250 01:24:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.250 01:24:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.250 01:24:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.250 01:24:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.251 01:24:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:12.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.251 --rc genhtml_branch_coverage=1 00:04:12.251 --rc genhtml_function_coverage=1 00:04:12.251 --rc genhtml_legend=1 00:04:12.251 --rc geninfo_all_blocks=1 00:04:12.251 --rc geninfo_unexecuted_blocks=1 00:04:12.251 00:04:12.251 ' 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:12.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.251 --rc 
genhtml_branch_coverage=1 00:04:12.251 --rc genhtml_function_coverage=1 00:04:12.251 --rc genhtml_legend=1 00:04:12.251 --rc geninfo_all_blocks=1 00:04:12.251 --rc geninfo_unexecuted_blocks=1 00:04:12.251 00:04:12.251 ' 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:12.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.251 --rc genhtml_branch_coverage=1 00:04:12.251 --rc genhtml_function_coverage=1 00:04:12.251 --rc genhtml_legend=1 00:04:12.251 --rc geninfo_all_blocks=1 00:04:12.251 --rc geninfo_unexecuted_blocks=1 00:04:12.251 00:04:12.251 ' 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:12.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.251 --rc genhtml_branch_coverage=1 00:04:12.251 --rc genhtml_function_coverage=1 00:04:12.251 --rc genhtml_legend=1 00:04:12.251 --rc geninfo_all_blocks=1 00:04:12.251 --rc geninfo_unexecuted_blocks=1 00:04:12.251 00:04:12.251 ' 00:04:12.251 01:24:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:12.251 01:24:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58157 00:04:12.251 01:24:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58157 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58157 ']' 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.251 01:24:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:12.251 01:24:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.251 [2024-11-17 01:24:20.672102] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:12.251 [2024-11-17 01:24:20.672228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58157 ]
00:04:12.509 [2024-11-17 01:24:20.829154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:12.509 [2024-11-17 01:24:20.905164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:13.078 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:13.078 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:04:13.078 01:24:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:13.078 01:24:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:13.078 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.078 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:13.078 {
00:04:13.078 "filename": "/tmp/spdk_mem_dump.txt"
00:04:13.078 }
00:04:13.078 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.078 01:24:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:04:13.341 DPDK memory size 816.000000 MiB in 1 heap(s)
00:04:13.341 1 heaps totaling size 816.000000 MiB
00:04:13.341 size: 816.000000 MiB heap id: 0
00:04:13.341 end heaps----------
00:04:13.341 9 mempools totaling size 595.772034 MiB
00:04:13.341 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:13.341 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:13.341 size: 92.545471 MiB name: bdev_io_58157
00:04:13.341 size: 50.003479 MiB name: msgpool_58157
00:04:13.341 size: 36.509338 MiB name: fsdev_io_58157
00:04:13.341 size: 21.763794 MiB name: PDU_Pool
00:04:13.341 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:13.341 size: 4.133484 MiB name: evtpool_58157
00:04:13.341 size: 0.026123 MiB name: Session_Pool
00:04:13.341 end mempools-------
00:04:13.341 6 memzones totaling size 4.142822 MiB
00:04:13.341 size: 1.000366 MiB name: RG_ring_0_58157
00:04:13.341 size: 1.000366 MiB name: RG_ring_1_58157
00:04:13.341 size: 1.000366 MiB name: RG_ring_4_58157
00:04:13.341 size: 1.000366 MiB name: RG_ring_5_58157
00:04:13.341 size: 0.125366 MiB name: RG_ring_2_58157
00:04:13.341 size: 0.015991 MiB name: RG_ring_3_58157
00:04:13.341 end memzones-------
00:04:13.341 01:24:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:04:13.341 heap id: 0 total size: 816.000000 MiB number of busy elements: 323 number of free elements: 18
00:04:13.341 list of free elements. size: 16.789429 MiB
00:04:13.341 element at address: 0x200006400000 with size: 1.995972 MiB
00:04:13.341 element at address: 0x20000a600000 with size: 1.995972 MiB
00:04:13.341 element at address: 0x200003e00000 with size: 1.991028 MiB
00:04:13.341 element at address: 0x200018d00040 with size: 0.999939 MiB
00:04:13.341 element at address: 0x200019100040 with size: 0.999939 MiB
00:04:13.341 element at address: 0x200019200000 with size: 0.999084 MiB
00:04:13.341 element at address: 0x200031e00000 with size: 0.994324 MiB
00:04:13.341 element at address: 0x200000400000 with size: 0.992004 MiB
00:04:13.341 element at address: 0x200018a00000 with size: 0.959656 MiB
00:04:13.341 element at address: 0x200019500040 with size: 0.936401 MiB
00:04:13.341 element at address: 0x200000200000 with size: 0.716980 MiB
00:04:13.341 element at address: 0x20001ac00000 with size: 0.559998 MiB
00:04:13.341 element at address: 0x200000c00000 with size: 0.490173 MiB
00:04:13.341 element at address: 0x200018e00000 with size: 0.487976 MiB
00:04:13.341 element at address: 0x200019600000 with size: 0.485413 MiB
00:04:13.341 element at address: 0x200012c00000 with size: 0.443237 MiB
00:04:13.341 element at address: 0x200028000000 with size: 0.390442 MiB
00:04:13.341 element at address: 0x200000800000 with size: 0.350891 MiB
00:04:13.341 list of standard malloc elements. size: 199.289673 MiB
00:04:13.341 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:04:13.341 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:04:13.341 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:04:13.341 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:04:13.341 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:04:13.341 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:13.341 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:04:13.341 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:13.341 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:04:13.341 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:04:13.341 element at address: 0x200012bff040 with size: 0.000305 MiB
[the remaining captured entries are all 0.000244 MiB bookkeeping elements, from 0x2000002d7b00 through 0x20001ac915c0, where this log excerpt cuts off mid-list]
00:04:13.343 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:13.343 element at 
address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:13.343 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:13.343 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806d080 
with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:13.343 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:13.344 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:13.344 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:13.344 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:13.344 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:13.344 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:13.344 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:13.344 list of memzone associated elements. 
00:04:13.344 size: 599.920898 MiB
00:04:13.344 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:04:13.344 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:13.344 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:04:13.344 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:13.344 element at address: 0x200012df4740 with size: 92.045105 MiB
00:04:13.344 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58157_0
00:04:13.344 element at address: 0x200000dff340 with size: 48.003113 MiB
00:04:13.344 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58157_0
00:04:13.344 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:04:13.344 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58157_0
00:04:13.344 element at address: 0x2000197be900 with size: 20.255615 MiB
00:04:13.344 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:13.344 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:04:13.344 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:13.344 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:04:13.344 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58157_0
00:04:13.344 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:04:13.344 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58157
00:04:13.344 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:04:13.344 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58157
00:04:13.344 element at address: 0x200018efde00 with size: 1.008179 MiB
00:04:13.344 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:13.344 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:04:13.344 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:13.344 element at address: 0x200018afde00 with size: 1.008179 MiB
00:04:13.344 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:13.344 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:04:13.344 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:13.344 element at address: 0x200000cff100 with size: 1.000549 MiB
00:04:13.344 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58157
00:04:13.344 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:04:13.344 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58157
00:04:13.344 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:04:13.344 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58157
00:04:13.344 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:04:13.344 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58157
00:04:13.344 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:04:13.344 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58157
00:04:13.344 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:04:13.344 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58157
00:04:13.344 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:04:13.344 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:13.344 element at address: 0x200012c72280 with size: 0.500549 MiB
00:04:13.344 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:13.344 element at address: 0x20001967c440 with size: 0.250549 MiB
00:04:13.344 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:13.344 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:04:13.344 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58157
00:04:13.344 element at address: 0x20000085df80 with size: 0.125549 MiB
00:04:13.344 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58157
00:04:13.344 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:04:13.344 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:13.344 element at address: 0x200028064140 with size: 0.023804 MiB
00:04:13.344 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:13.344 element at address: 0x200000859d40 with size: 0.016174 MiB
00:04:13.344 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58157
00:04:13.344 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:04:13.344 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:13.344 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:04:13.344 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58157
00:04:13.344 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:04:13.344 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58157
00:04:13.344 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:04:13.344 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58157
00:04:13.344 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:04:13.344 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:13.344 01:24:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:13.344 01:24:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58157
00:04:13.344 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58157 ']'
00:04:13.344 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58157
00:04:13.344 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:13.344 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:13.344 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58157
00:04:13.344 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:13.344 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:13.344 killing process with pid 58157
00:04:13.344 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58157'
00:04:13.344 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58157
00:04:13.344 01:24:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58157
00:04:14.730
00:04:14.730 real	0m2.348s
00:04:14.730 user	0m2.400s
00:04:14.730 sys	0m0.353s
00:04:14.730 ************************************
00:04:14.730 END TEST dpdk_mem_utility
00:04:14.730 ************************************
00:04:14.730 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:14.730 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
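The heap and memzone dump above is DPDK's standard memory report, collected by test_dpdk_mem_info.sh from the target running as pid 58157. As a hedged sketch only (the exact RPC name is an assumption based on the test script, not shown in this log), a similar dump can usually be pulled from a live SPDK target like this:
  # ask the target to write out its DPDK memory stats; with env_dpdk_get_mem_stats
  # the report conventionally lands in /tmp/spdk_mem_dump.txt on the target host
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
  cat /tmp/spdk_mem_dump.txt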
00:04:14.730 01:24:22 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:14.730 01:24:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:14.730 01:24:22 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:14.730 01:24:22 -- common/autotest_common.sh@10 -- # set +x
00:04:14.730 ************************************
00:04:14.731 START TEST event
00:04:14.731 ************************************
00:04:14.731 01:24:22 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:14.731 * Looking for test storage...
00:04:14.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:14.731 01:24:22 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:14.731 01:24:22 event -- common/autotest_common.sh@1693 -- # lcov --version
00:04:14.731 01:24:22 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:14.731 [~30 lines of scripts/common.sh cmp_versions xtrace elided: lcov 1.15 is compared field by field against 2, found older, and the comparison returns 0]
00:04:14.731 01:24:22 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:14.731 [exports of LCOV_OPTS and LCOV elided: both carry --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1]
00:04:14.731 01:24:22 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:14.731 01:24:22 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:14.731 01:24:22 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:14.731 01:24:22 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:04:14.731 01:24:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:14.731 01:24:22 event -- common/autotest_common.sh@10 -- # set +x
00:04:14.731 ************************************
00:04:14.731 START TEST event_perf
00:04:14.731 ************************************
00:04:14.731 01:24:22 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:15.935 Running I/O for 1 seconds...[2024-11-17 01:24:23.023150] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:04:15.935 [2024-11-17 01:24:23.023258] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58243 ]
00:04:15.935 [2024-11-17 01:24:23.178460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:15.935 [2024-11-17 01:24:23.263222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:15.935 Running I/O for 1 seconds...[2024-11-17 01:24:23.263539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:15.935 [2024-11-17 01:24:23.263591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:15.935 [2024-11-17 01:24:23.263619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:15.935
00:04:15.935 lcore 0: 205578
00:04:15.935 lcore 1: 205581
00:04:15.935 lcore 2: 205580
00:04:15.935 lcore 3: 205578
00:04:15.935 done.
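event_perf was launched with -m 0xF, so one reactor is pinned per set bit in the core mask, and each lcore line above reports how many events that core processed during the one-second run (~205k apiece, i.e. the load balanced evenly). A quick, self-contained bash sketch for sizing a mask before choosing one:
  # count the set bits in a core mask, e.g. 0xF -> 4 reactors
  mask=0xF
  bits=$(printf '%d\n' "$mask" | awk '{n=$1; c=0; while (n>0) {c+=n%2; n=int(n/2)}; print c}')
  echo "$mask -> $bits reactors"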
00:04:16.195
00:04:16.195 real	0m1.398s
00:04:16.195 user	0m4.207s
00:04:16.195 sys	0m0.075s
00:04:16.195 ************************************
00:04:16.195 END TEST event_perf
00:04:16.195 ************************************
00:04:16.195 01:24:24 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:16.195 01:24:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:16.195 01:24:24 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:16.195 01:24:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:16.195 01:24:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:16.195 01:24:24 event -- common/autotest_common.sh@10 -- # set +x
00:04:16.195 ************************************
00:04:16.195 START TEST event_reactor
00:04:16.195 ************************************
00:04:16.195 01:24:24 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:16.195 [2024-11-17 01:24:24.476359] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:04:16.195 [2024-11-17 01:24:24.476468] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58282 ]
00:04:16.456 [2024-11-17 01:24:24.642879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:16.456 [2024-11-17 01:24:24.744738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:17.842 test_start
00:04:17.842 oneshot
00:04:17.842 tick 100
00:04:17.842 tick 100
00:04:17.842 tick 250
00:04:17.842 tick 100
00:04:17.842 tick 100
00:04:17.842 tick 100
00:04:17.842 tick 250
00:04:17.842 tick 500
00:04:17.842 tick 100
00:04:17.842 tick 100
00:04:17.842 tick 250
00:04:17.842 tick 100
00:04:17.842 tick 100
00:04:17.842 test_end
00:04:17.842
00:04:17.842 real	0m1.451s
00:04:17.842 user	0m1.275s
00:04:17.842 sys	0m0.066s
00:04:17.842 01:24:25 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.842 01:24:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:17.842 ************************************
00:04:17.842 END TEST event_reactor
00:04:17.842 ************************************
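The oneshot/tick trace above is the reactor test's own output: each tick line appears to record a timer event firing on the single reactor, labeled by its period, so 'tick 100' fires most often and 'tick 500' only once in the one-second window between test_start and test_end. To re-run just this binary outside the harness (same flag as the log; the path is the workspace layout seen above):
  # -t 1: keep the reactor running for one second, then dump the timer trace
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1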
00:04:17.842 01:24:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:17.842 01:24:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:17.842 01:24:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.842 01:24:25 event -- common/autotest_common.sh@10 -- # set +x
00:04:17.843 ************************************
00:04:17.843 START TEST event_reactor_perf
00:04:17.843 ************************************
00:04:17.843 01:24:25 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:17.843 [2024-11-17 01:24:25.969193] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:04:17.843 [2024-11-17 01:24:25.969607] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58319 ]
00:04:17.843 [2024-11-17 01:24:26.126873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:17.843 [2024-11-17 01:24:26.220207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:19.226 test_start
00:04:19.226 test_end
00:04:19.226 Performance: 312695 events per second
00:04:19.226
00:04:19.226 real	0m1.435s
00:04:19.226 user	0m1.256s
00:04:19.226 sys	0m0.071s
00:04:19.226 01:24:27 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:19.226 01:24:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:19.226 ************************************
00:04:19.226 END TEST event_reactor_perf
00:04:19.226 ************************************
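Unlike event_perf, reactor_perf drives a single reactor (-c 0x1) as hard as it can and prints one headline number: 312695 events per second here, i.e. the single-core event round-trip rate; the wall time (real 0m1.435s) exceeds -t 1 because app start-up and teardown are included. A back-of-the-envelope check of that figure in plain shell:
  # ~312695 events/s on one core works out to roughly 3.2 microseconds per event
  awk 'BEGIN { printf "%.2f us/event\n", 1e6 / 312695 }'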
00:04:19.226 01:24:27 event -- event/event.sh@49 -- # uname -s
00:04:19.226 01:24:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:19.226 01:24:27 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:19.226 01:24:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:19.226 01:24:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:19.226 01:24:27 event -- common/autotest_common.sh@10 -- # set +x
00:04:19.226 ************************************
00:04:19.226 START TEST event_scheduler
00:04:19.226 ************************************
00:04:19.226 01:24:27 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:19.226 * Looking for test storage...
00:04:19.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:04:19.226 01:24:27 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:19.226 01:24:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:04:19.226 01:24:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:19.226 [the same ~40-line cmp_versions/LCOV xtrace that preceded START TEST event above repeats here verbatim (lcov 1.15 compared against 2, coverage --rc flags exported) and is elided]
00:04:19.226 01:24:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:19.226 01:24:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58389
00:04:19.226 01:24:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:19.226 01:24:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58389
00:04:19.226 01:24:27 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58389 ']'
00:04:19.226 01:24:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:19.226 01:24:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:19.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:19.226 01:24:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:19.226 01:24:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:19.226 01:24:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:19.226 01:24:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:19.487 [2024-11-17 01:24:27.632176] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:04:19.487 [2024-11-17 01:24:27.632300] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58389 ]
00:04:19.487 [2024-11-17 01:24:27.790225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:19.487 [2024-11-17 01:24:27.891404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:19.487 [2024-11-17 01:24:27.891731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:19.487 [2024-11-17 01:24:27.891889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:19.487 [2024-11-17 01:24:27.891900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:20.060 01:24:28 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:20.060 01:24:28 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:04:20.060 01:24:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:20.060 01:24:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.060 01:24:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:20.322 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:20.322 POWER: Cannot set governor of lcore 0 to userspace
00:04:20.322 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:20.322 POWER: Cannot set governor of lcore 0 to performance
00:04:20.322 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:20.322 POWER: Cannot set governor of lcore 0 to userspace
00:04:20.322 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:20.322 POWER: Cannot set governor of lcore 0 to userspace
00:04:20.322 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:04:20.322 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:04:20.322 POWER: Unable to set Power Management Environment for lcore 0
00:04:20.322 [2024-11-17 01:24:28.521723] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:04:20.322 [2024-11-17 01:24:28.521778] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:04:20.322 [2024-11-17 01:24:28.521844] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
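The POWER and GUEST_CHANNEL errors above are expected on this VM host: the dynamic scheduler first tries to drive cpufreq governors, finds no /sys/devices/system/cpu/cpu*/cpufreq entries in the guest, then fails to reach the host's virtio power agent, and finally gives up on the dpdk governor as the last notice shows; the test proceeds regardless. A one-line check for whether a machine exposes governors at all:
  # prints a governor name (e.g. performance) on bare metal; errors out on most VMs
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null || echo 'no cpufreq governor exposed'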
00:04:20.322 [2024-11-17 01:24:28.521911] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:20.322 [2024-11-17 01:24:28.521955] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:20.322 [2024-11-17 01:24:28.521997] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:20.322 01:24:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.322 01:24:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:20.322 01:24:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.322 01:24:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:20.322 [2024-11-17 01:24:28.738842] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:20.322 01:24:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.322 01:24:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:20.322 01:24:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:20.322 01:24:28 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:20.322 01:24:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:20.322 ************************************
00:04:20.322 START TEST scheduler_create_thread
00:04:20.322 ************************************
00:04:20.322 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:04:20.322 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:20.322 2
00:04:20.322 [the xtrace for scheduler.sh@13 through scheduler.sh@21 is elided: the same rpc_cmd pattern creates active_pinned threads on masks 0x2, 0x4 and 0x8 (thread ids 3-5), idle_pinned threads with -a 0 on masks 0x1, 0x2, 0x4 and 0x8 (thread ids 6-9), and a one_third_active thread with -a 30 (thread id 10); each call is bracketed by xtrace_disable/set +x and a [[ 0 == 0 ]] status check, as are the calls below]
00:04:20.585 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:20.585 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:20.585 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:20.585 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:20.585 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:20.585 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:20.585
00:04:20.585 real	0m0.106s
00:04:20.585 user	0m0.010s
00:04:20.585 sys	0m0.007s
00:04:20.585 ************************************
00:04:20.585 END TEST scheduler_create_thread
00:04:20.585 ************************************
00:04:20.585 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:20.585 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
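scheduler_create_thread drives the scheduler entirely through the plugin RPCs visible in the xtrace above; outside the harness the same calls can be issued directly with rpc.py, assuming the test's scheduler_plugin module is importable (the harness arranges PYTHONPATH for this). Thread ids are assigned by the target, so the 13 below is illustrative only:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # create a thread pinned to core 0 that reports itself 100% active
  $rpc -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # mark a thread (id 13, say) as 50% active, then delete it
  $rpc -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_set_active 13 50
  $rpc -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_delete 13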
00:04:20.585 01:24:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:20.585 01:24:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58389
00:04:20.585 01:24:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58389 ']'
00:04:20.585 01:24:28 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58389
00:04:20.585 01:24:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:04:20.585 01:24:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:20.585 01:24:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58389
00:04:20.585 01:24:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:04:20.585 01:24:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:04:20.585 killing process with pid 58389
00:04:20.585 01:24:28 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58389'
00:04:20.585 01:24:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58389
00:04:21.152 01:24:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58389
00:04:21.152 [2024-11-17 01:24:29.339758] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:21.728
00:04:21.728 real	0m2.503s
00:04:21.728 user	0m4.515s
00:04:21.728 sys	0m0.323s
00:04:21.728 ************************************
00:04:21.728 END TEST event_scheduler
00:04:21.728 ************************************
00:04:21.728 01:24:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:21.728 01:24:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
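The final test below repeats spdk_app_start rounds against the NBD path: as its xtrace shows, it creates two 64 MiB malloc bdevs with 4096-byte blocks over the app's RPC socket and exports them as /dev/nbd0 and /dev/nbd1. The core RPC pair, runnable against any SPDK app listening on that socket (a sketch of what the harness does, not a replacement for it):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # prints the new bdev name, e.g. Malloc0
  $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0  # attach the bdev to an nbd device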
00:04:21.728 [2024-11-17 01:24:30.010401] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58468 ] 00:04:21.728 [2024-11-17 01:24:30.167801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.987 [2024-11-17 01:24:30.262988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.987 [2024-11-17 01:24:30.263078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.553 01:24:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.553 01:24:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:22.553 01:24:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.811 Malloc0 00:04:22.811 01:24:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.070 Malloc1 00:04:23.070 01:24:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.070 01:24:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.328 /dev/nbd0 00:04:23.328 01:24:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.328 01:24:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:23.328 01:24:31 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.328 1+0 records in 00:04:23.328 1+0 records out 00:04:23.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155728 s, 26.3 MB/s 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.328 01:24:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.328 01:24:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.328 01:24:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.328 01:24:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.586 /dev/nbd1 00:04:23.586 01:24:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.586 01:24:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.586 1+0 records in 00:04:23.586 1+0 records out 00:04:23.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477842 s, 8.6 MB/s 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.586 01:24:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.586 01:24:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.586 01:24:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.586 01:24:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.586 01:24:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.586 
01:24:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:23.844 { 00:04:23.844 "nbd_device": "/dev/nbd0", 00:04:23.844 "bdev_name": "Malloc0" 00:04:23.844 }, 00:04:23.844 { 00:04:23.844 "nbd_device": "/dev/nbd1", 00:04:23.844 "bdev_name": "Malloc1" 00:04:23.844 } 00:04:23.844 ]' 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:23.844 { 00:04:23.844 "nbd_device": "/dev/nbd0", 00:04:23.844 "bdev_name": "Malloc0" 00:04:23.844 }, 00:04:23.844 { 00:04:23.844 "nbd_device": "/dev/nbd1", 00:04:23.844 "bdev_name": "Malloc1" 00:04:23.844 } 00:04:23.844 ]' 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:23.844 /dev/nbd1' 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:23.844 /dev/nbd1' 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:23.844 256+0 records in 00:04:23.844 256+0 records out 00:04:23.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100435 s, 104 MB/s 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:23.844 256+0 records in 00:04:23.844 256+0 records out 00:04:23.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236541 s, 44.3 MB/s 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:23.844 256+0 records in 00:04:23.844 256+0 records out 00:04:23.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241737 s, 43.4 MB/s 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.844 01:24:32 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.844 01:24:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.103 01:24:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.103 01:24:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.103 01:24:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.103 01:24:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.103 01:24:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.103 01:24:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.103 01:24:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.103 01:24:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.103 01:24:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.103 01:24:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.362 01:24:32 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.362 01:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.620 01:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.620 01:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.620 01:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.620 01:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:24.620 01:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.620 01:24:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.620 01:24:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.620 01:24:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.620 01:24:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.620 01:24:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.879 01:24:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:25.446 [2024-11-17 01:24:33.700886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.446 [2024-11-17 01:24:33.773917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.446 [2024-11-17 01:24:33.774071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.446 [2024-11-17 01:24:33.869291] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:25.446 [2024-11-17 01:24:33.869330] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:27.976 01:24:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:27.976 spdk_app_start Round 1 00:04:27.976 01:24:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:27.976 01:24:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58468 /var/tmp/spdk-nbd.sock 00:04:27.976 01:24:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58468 ']' 00:04:27.976 01:24:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:27.976 01:24:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:27.976 01:24:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
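Each round runs the nbd_dd_data_verify write/verify pair seen in the trace: seed a 1 MiB random file, copy it onto every exported NBD device with O_DIRECT, then byte-compare it back. A condensed sketch using the exact dd/cmp invocations from the log (the temp-file path is the harness's own; the real helper writes to all devices before verifying any):

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  # Seed 1 MiB of random data (256 x 4 KiB blocks).
  dd if=/dev/urandom of="$tmp" bs=4096 count=256

  for nbd in /dev/nbd0 /dev/nbd1; do
      # Write the pattern through the NBD device, bypassing the page cache.
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
      # Read back and byte-compare the first 1 MiB.
      cmp -b -n 1M "$tmp" "$nbd"
  done
  rm "$tmp"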
00:04:27.976 01:24:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.976 01:24:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:27.976 01:24:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.976 01:24:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:27.976 01:24:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.234 Malloc0 00:04:28.234 01:24:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.491 Malloc1 00:04:28.491 01:24:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.491 01:24:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:28.748 /dev/nbd0 00:04:28.748 01:24:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:28.748 01:24:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:28.748 01:24:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:28.749 1+0 records in 00:04:28.749 1+0 records out 
00:04:28.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224102 s, 18.3 MB/s 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:28.749 01:24:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:28.749 01:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:28.749 01:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.749 01:24:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:29.006 /dev/nbd1 00:04:29.006 01:24:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:29.007 01:24:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.007 1+0 records in 00:04:29.007 1+0 records out 00:04:29.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028888 s, 14.2 MB/s 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:29.007 01:24:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:29.007 01:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.007 01:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.007 01:24:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.007 01:24:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.007 01:24:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.264 01:24:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:29.264 { 00:04:29.265 "nbd_device": "/dev/nbd0", 00:04:29.265 "bdev_name": "Malloc0" 00:04:29.265 }, 00:04:29.265 { 00:04:29.265 "nbd_device": "/dev/nbd1", 00:04:29.265 "bdev_name": "Malloc1" 00:04:29.265 } 
00:04:29.265 ]' 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:29.265 { 00:04:29.265 "nbd_device": "/dev/nbd0", 00:04:29.265 "bdev_name": "Malloc0" 00:04:29.265 }, 00:04:29.265 { 00:04:29.265 "nbd_device": "/dev/nbd1", 00:04:29.265 "bdev_name": "Malloc1" 00:04:29.265 } 00:04:29.265 ]' 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:29.265 /dev/nbd1' 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:29.265 /dev/nbd1' 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:29.265 256+0 records in 00:04:29.265 256+0 records out 00:04:29.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00721178 s, 145 MB/s 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:29.265 256+0 records in 00:04:29.265 256+0 records out 00:04:29.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156031 s, 67.2 MB/s 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:29.265 256+0 records in 00:04:29.265 256+0 records out 00:04:29.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189804 s, 55.2 MB/s 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:29.265 01:24:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.265 01:24:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:29.523 01:24:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:29.523 01:24:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:29.523 01:24:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:29.523 01:24:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.523 01:24:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.523 01:24:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:29.523 01:24:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.523 01:24:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.523 01:24:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.523 01:24:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.781 01:24:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:30.039 01:24:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:30.039 01:24:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:30.298 01:24:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:30.864 [2024-11-17 01:24:39.291181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.121 [2024-11-17 01:24:39.378750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.121 [2024-11-17 01:24:39.378832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.121 [2024-11-17 01:24:39.507709] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:31.121 [2024-11-17 01:24:39.507772] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:33.653 spdk_app_start Round 2 00:04:33.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:33.653 01:24:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:33.653 01:24:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:33.653 01:24:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58468 /var/tmp/spdk-nbd.sock 00:04:33.653 01:24:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58468 ']' 00:04:33.653 01:24:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.653 01:24:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.653 01:24:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
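The waitfornbd/waitfornbd_exit counters in the trace ((( i <= 20 ))) are a bounded poll of /proc/partitions: attach is considered complete once the nbd name shows up there, and detach once it disappears again. A sketch of the attach side (the sleep interval between tries is not visible in this excerpt, so it is an assumption here):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # Same probe as the trace: word-match the device name.
          grep -q -w "$nbd_name" /proc/partitions && return 0
          sleep 0.1   # assumed interval; only the 20-try bound is shown above
      done
      return 1
  }

  waitfornbd nbd0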
00:04:33.653 01:24:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.653 01:24:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.653 01:24:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.653 01:24:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:33.653 01:24:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.653 Malloc0 00:04:33.653 01:24:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.912 Malloc1 00:04:33.912 01:24:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.912 01:24:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:34.170 /dev/nbd0 00:04:34.170 01:24:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.170 01:24:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.170 1+0 records in 00:04:34.170 1+0 records out 
00:04:34.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248571 s, 16.5 MB/s 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.170 01:24:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.170 01:24:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.170 01:24:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.170 01:24:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.429 /dev/nbd1 00:04:34.429 01:24:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.429 01:24:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.429 1+0 records in 00:04:34.429 1+0 records out 00:04:34.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240397 s, 17.0 MB/s 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.429 01:24:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.429 01:24:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.429 01:24:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.429 01:24:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.429 01:24:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.429 01:24:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:34.688 { 00:04:34.688 "nbd_device": "/dev/nbd0", 00:04:34.688 "bdev_name": "Malloc0" 00:04:34.688 }, 00:04:34.688 { 00:04:34.688 "nbd_device": "/dev/nbd1", 00:04:34.688 "bdev_name": "Malloc1" 00:04:34.688 } 
00:04:34.688 ]' 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.688 { 00:04:34.688 "nbd_device": "/dev/nbd0", 00:04:34.688 "bdev_name": "Malloc0" 00:04:34.688 }, 00:04:34.688 { 00:04:34.688 "nbd_device": "/dev/nbd1", 00:04:34.688 "bdev_name": "Malloc1" 00:04:34.688 } 00:04:34.688 ]' 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.688 /dev/nbd1' 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.688 /dev/nbd1' 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.688 256+0 records in 00:04:34.688 256+0 records out 00:04:34.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473971 s, 221 MB/s 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.688 01:24:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.688 256+0 records in 00:04:34.688 256+0 records out 00:04:34.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127397 s, 82.3 MB/s 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.688 256+0 records in 00:04:34.688 256+0 records out 00:04:34.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164599 s, 63.7 MB/s 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.688 01:24:43 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.688 01:24:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:34.947 01:24:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:34.947 01:24:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:34.947 01:24:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:34.947 01:24:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.947 01:24:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.947 01:24:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.947 01:24:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.947 01:24:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.947 01:24:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.947 01:24:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.207 01:24:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.467 01:24:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.467 01:24:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.727 01:24:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:36.299 [2024-11-17 01:24:44.540124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.299 [2024-11-17 01:24:44.608209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.299 [2024-11-17 01:24:44.608215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.299 [2024-11-17 01:24:44.704414] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.299 [2024-11-17 01:24:44.704458] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.833 01:24:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58468 /var/tmp/spdk-nbd.sock 00:04:38.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:38.833 01:24:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58468 ']' 00:04:38.833 01:24:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.833 01:24:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.833 01:24:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
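The killprocess helper traced earlier for pid 58389 and, below, for pid 58468 guards the kill three ways before signalling: the pid must be non-empty, still alive per kill -0, and on Linux its comm name is read with ps so that a sudo wrapper is never signalled directly. A sketch that mirrors the branches visible in the trace (the sudo branch is not exercised in this log, so it simply bails out here):

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1       # the '[ -z $pid ]' guard in the trace
      kill -0 "$pid" || return 1      # already gone?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1   # assumption: never kill sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                     # reap and propagate the exit status
  }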
00:04:38.833 01:24:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.833 01:24:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:38.833 01:24:47 event.app_repeat -- event/event.sh@39 -- # killprocess 58468 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58468 ']' 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58468 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58468 00:04:38.833 killing process with pid 58468 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58468' 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58468 00:04:38.833 01:24:47 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58468 00:04:39.400 spdk_app_start is called in Round 0. 00:04:39.400 Shutdown signal received, stop current app iteration 00:04:39.400 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:04:39.400 spdk_app_start is called in Round 1. 00:04:39.400 Shutdown signal received, stop current app iteration 00:04:39.400 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:04:39.400 spdk_app_start is called in Round 2. 00:04:39.400 Shutdown signal received, stop current app iteration 00:04:39.400 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:04:39.400 spdk_app_start is called in Round 3. 00:04:39.400 Shutdown signal received, stop current app iteration 00:04:39.400 ************************************ 00:04:39.400 END TEST app_repeat 00:04:39.400 ************************************ 00:04:39.400 01:24:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:39.400 01:24:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:39.400 00:04:39.400 real 0m17.776s 00:04:39.400 user 0m38.966s 00:04:39.400 sys 0m2.062s 00:04:39.400 01:24:47 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.400 01:24:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.400 01:24:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:39.400 01:24:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:39.400 01:24:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.400 01:24:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.400 01:24:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.400 ************************************ 00:04:39.400 START TEST cpu_locks 00:04:39.400 ************************************ 00:04:39.400 01:24:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:39.400 * Looking for test storage... 
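Before the cpu_locks tests begin, autotest_common.sh probes the installed lcov and picks matching --rc coverage flags; the lt/cmp_versions trace that follows is a field-by-field version comparison that splits on '.', '-' and ':'. A condensed re-sketch of just the less-than path exercised here (the real helper in scripts/common.sh also handles other comparison operators):

  lt() {
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # first arg is larger
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # first arg is smaller
      done
      return 1   # equal
  }

  lt 1.15 2 && echo 'installed lcov is newer than 1.15'   # the case traced below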
00:04:39.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:39.400 01:24:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.400 01:24:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.400 01:24:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.659 01:24:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.659 01:24:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:39.659 01:24:47 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.659 01:24:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.659 --rc genhtml_branch_coverage=1 00:04:39.659 --rc genhtml_function_coverage=1 00:04:39.659 --rc genhtml_legend=1 00:04:39.659 --rc geninfo_all_blocks=1 00:04:39.659 --rc geninfo_unexecuted_blocks=1 00:04:39.659 00:04:39.659 ' 00:04:39.659 01:24:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.659 --rc genhtml_branch_coverage=1 00:04:39.659 --rc genhtml_function_coverage=1 
00:04:39.659 --rc genhtml_legend=1 00:04:39.659 --rc geninfo_all_blocks=1 00:04:39.659 --rc geninfo_unexecuted_blocks=1 00:04:39.659 00:04:39.659 ' 00:04:39.659 01:24:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.660 --rc genhtml_branch_coverage=1 00:04:39.660 --rc genhtml_function_coverage=1 00:04:39.660 --rc genhtml_legend=1 00:04:39.660 --rc geninfo_all_blocks=1 00:04:39.660 --rc geninfo_unexecuted_blocks=1 00:04:39.660 00:04:39.660 ' 00:04:39.660 01:24:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.660 --rc genhtml_branch_coverage=1 00:04:39.660 --rc genhtml_function_coverage=1 00:04:39.660 --rc genhtml_legend=1 00:04:39.660 --rc geninfo_all_blocks=1 00:04:39.660 --rc geninfo_unexecuted_blocks=1 00:04:39.660 00:04:39.660 ' 00:04:39.660 01:24:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:39.660 01:24:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:39.660 01:24:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:39.660 01:24:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:39.660 01:24:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.660 01:24:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.660 01:24:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.660 ************************************ 00:04:39.660 START TEST default_locks 00:04:39.660 ************************************ 00:04:39.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.660 01:24:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:39.660 01:24:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58893 00:04:39.660 01:24:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58893 00:04:39.660 01:24:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58893 ']' 00:04:39.660 01:24:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.660 01:24:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.660 01:24:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.660 01:24:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.660 01:24:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.660 01:24:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.660 [2024-11-17 01:24:47.988241] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
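The `lt 1.15 2` trace above is the harness comparing the installed lcov version against 2 before picking coverage flags: both versions are split on `.-:` and compared field by field as integers. A minimal standalone sketch of that dotted-version comparison, with a hypothetical helper name rather than the harness's own `cmp_versions`, and assuming purely numeric fields (the traced code also regex-checks each field):

    #!/usr/bin/env bash
    # Sketch: ver_lt A B -> exit 0 if version A sorts before version B.
    ver_lt() {
        local IFS='.-:'                 # split on the same separators the trace shows
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                        # equal -> not less-than
    }

    ver_lt 1.15 2 && echo "1.15 < 2"    # prints: 1.15 < 2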
00:04:39.660 [2024-11-17 01:24:47.988339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58893 ] 00:04:39.918 [2024-11-17 01:24:48.137899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.918 [2024-11-17 01:24:48.220690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.534 01:24:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.534 01:24:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:40.534 01:24:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58893 00:04:40.534 01:24:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58893 00:04:40.534 01:24:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.820 01:24:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58893 00:04:40.820 01:24:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58893 ']' 00:04:40.820 01:24:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58893 00:04:40.820 01:24:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:40.820 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.820 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58893 00:04:40.821 killing process with pid 58893 00:04:40.821 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.821 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.821 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58893' 00:04:40.821 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58893 00:04:40.821 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58893 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58893 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58893 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58893 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58893 ']' 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.760 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.760 ERROR: process (pid: 58893) is no longer running 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.760 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58893) - No such process 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:41.760 00:04:41.760 real 0m2.254s 00:04:41.760 user 0m2.250s 00:04:41.760 sys 0m0.437s 00:04:41.760 ************************************ 00:04:41.760 END TEST default_locks 00:04:41.760 ************************************ 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.760 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.760 01:24:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:41.760 01:24:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.760 01:24:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.760 01:24:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.760 ************************************ 00:04:41.760 START TEST default_locks_via_rpc 00:04:41.760 ************************************ 00:04:41.760 01:24:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:41.760 01:24:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58957 00:04:41.760 01:24:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58957 00:04:41.760 01:24:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58957 ']' 00:04:41.760 01:24:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
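The `NOT waitforlisten 58893` block traced above is the harness asserting that an operation fails: after the target is killed, waiting on its socket must error out, so the wrapper runs the command, captures its exit status, and inverts it. A simplified sketch of that inversion (the traced helper additionally validates its argument type and handles signal statuses above 128):

    # Sketch: expect-failure wrapper in the style of the NOT trace above.
    NOT() {
        local es=0
        "$@" || es=$?      # run the command, remember its exit status
        (( es != 0 ))      # pass only if the inner command failed
    }

    NOT false && echo "false failed, as expected"
    NOT true  || echo "true succeeded, so NOT reports failure"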
00:04:41.760 01:24:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.760 01:24:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.760 01:24:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.760 01:24:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.760 01:24:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.018 [2024-11-17 01:24:50.278743] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:42.019 [2024-11-17 01:24:50.278978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58957 ] 00:04:42.019 [2024-11-17 01:24:50.419004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.277 [2024-11-17 01:24:50.496028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.843 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.844 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.844 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58957 00:04:42.844 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58957 00:04:42.844 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.101 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58957 00:04:43.101 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58957 ']' 00:04:43.102 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58957 00:04:43.102 01:24:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:43.102 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.102 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58957 00:04:43.102 killing process with pid 58957 00:04:43.102 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.102 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.102 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58957' 00:04:43.102 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58957 00:04:43.102 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58957 00:04:44.477 00:04:44.477 real 0m2.308s 00:04:44.477 user 0m2.318s 00:04:44.477 sys 0m0.410s 00:04:44.477 01:24:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.477 ************************************ 00:04:44.477 END TEST default_locks_via_rpc 00:04:44.477 ************************************ 00:04:44.477 01:24:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.477 01:24:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:44.477 01:24:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.477 01:24:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.477 01:24:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.477 ************************************ 00:04:44.477 START TEST non_locking_app_on_locked_coremask 00:04:44.477 ************************************ 00:04:44.477 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:44.477 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59009 00:04:44.477 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59009 /var/tmp/spdk.sock 00:04:44.477 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59009 ']' 00:04:44.477 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.477 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.477 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
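default_locks_via_rpc, traced above, exercises the same core-lock lifecycle but over JSON-RPC instead of process start-up flags: the running target drops its lock on `framework_disable_cpumask_locks`, the no_locks glob confirms nothing is held, and `framework_enable_cpumask_locks` re-claims the core. A hedged sketch of driving that by hand with SPDK's rpc.py, assuming a target is already listening on /var/tmp/spdk.sock and the repo path from the trace:

    # Sketch: toggle CPU core locks on a live target via JSON-RPC.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release per-core lock files
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no locks held"
    $RPC -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim them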
00:04:44.477 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.477 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.477 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.477 [2024-11-17 01:24:52.626450] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:44.477 [2024-11-17 01:24:52.626544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59009 ] 00:04:44.477 [2024-11-17 01:24:52.774746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.478 [2024-11-17 01:24:52.852928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59025 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59025 /var/tmp/spdk2.sock 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59025 ']' 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.044 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.301 [2024-11-17 01:24:53.546639] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:45.301 [2024-11-17 01:24:53.547181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59025 ] 00:04:45.301 [2024-11-17 01:24:53.710284] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
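non_locking_app_on_locked_coremask starts a second target on the same core as the lock holder, but with --disable-cpumask-locks and its own RPC socket, so the two coexist instead of the second aborting. A sketch of that launch sequence under the binary path and socket names shown in the trace (environment setup such as hugepages is assumed to be done already):

    # Sketch: one lock-holding target plus one lock-free target on core 0.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $SPDK_TGT -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    $SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # skips the claim, so it starts too
    pid2=$!
    sleep 2                     # give both time to come up
    kill "$pid1" "$pid2"        # without --disable-cpumask-locks the second launch would abort on core 0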
00:04:45.301 [2024-11-17 01:24:53.710331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.559 [2024-11-17 01:24:53.871772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.492 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.492 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:46.492 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59009 00:04:46.492 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59009 00:04:46.492 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59009 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59009 ']' 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59009 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59009 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.751 killing process with pid 59009 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59009' 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59009 00:04:46.751 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59009 00:04:49.281 01:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59025 00:04:49.281 01:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59025 ']' 00:04:49.281 01:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59025 00:04:49.281 01:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:49.281 01:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.281 01:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59025 00:04:49.281 01:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.281 01:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.281 killing process with pid 59025 00:04:49.281 01:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59025' 00:04:49.281 01:24:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59025 00:04:49.281 01:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59025 00:04:50.691 00:04:50.691 real 0m6.164s 00:04:50.691 user 0m6.406s 00:04:50.691 sys 0m0.831s 00:04:50.691 ************************************ 00:04:50.691 END TEST non_locking_app_on_locked_coremask 00:04:50.691 ************************************ 00:04:50.691 01:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.691 01:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.691 01:24:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:50.691 01:24:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.691 01:24:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.691 01:24:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.691 ************************************ 00:04:50.691 START TEST locking_app_on_unlocked_coremask 00:04:50.691 ************************************ 00:04:50.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.692 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:50.692 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59116 00:04:50.692 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59116 /var/tmp/spdk.sock 00:04:50.692 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59116 ']' 00:04:50.692 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.692 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.692 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.692 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.692 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:50.692 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.692 [2024-11-17 01:24:58.857367] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:50.692 [2024-11-17 01:24:58.857654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59116 ] 00:04:50.692 [2024-11-17 01:24:59.013522] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
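Each locks_exist step traced above is a direct OS-level assertion: lslocks lists the file locks held by the target PID, and the grep requires one on a spdk_cpu_lock file. A minimal sketch of the same check (the PID is hypothetical; substitute a live spdk_tgt PID):

    # Sketch: assert a process holds its per-core lock file, as locks_exist does.
    pid=59116
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds a core lock"
    else
        echo "pid $pid holds no core lock"
    fi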
00:04:50.692 [2024-11-17 01:24:59.013718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.692 [2024-11-17 01:24:59.091842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59132 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59132 /var/tmp/spdk2.sock 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59132 ']' 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.258 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.517 [2024-11-17 01:24:59.762251] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:51.517 [2024-11-17 01:24:59.762370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59132 ] 00:04:51.517 [2024-11-17 01:24:59.926360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.775 [2024-11-17 01:25:00.086636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.710 01:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.710 01:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:52.710 01:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59132 00:04:52.710 01:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59132 00:04:52.710 01:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59116 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59116 ']' 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59116 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59116 00:04:52.968 killing process with pid 59116 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59116' 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59116 00:04:52.968 01:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59116 00:04:55.512 01:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59132 00:04:55.512 01:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59132 ']' 00:04:55.512 01:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59132 00:04:55.512 01:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:55.512 01:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.512 01:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59132 00:04:55.512 killing process with pid 59132 00:04:55.512 01:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.512 01:25:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.512 01:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59132' 00:04:55.512 01:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59132 00:04:55.512 01:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59132 00:04:56.890 ************************************ 00:04:56.890 END TEST locking_app_on_unlocked_coremask 00:04:56.890 ************************************ 00:04:56.890 00:04:56.890 real 0m6.144s 00:04:56.890 user 0m6.444s 00:04:56.890 sys 0m0.835s 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.890 01:25:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:56.890 01:25:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.890 01:25:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.890 01:25:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.890 ************************************ 00:04:56.890 START TEST locking_app_on_locked_coremask 00:04:56.890 ************************************ 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59223 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59223 /var/tmp/spdk.sock 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59223 ']' 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.890 01:25:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.890 [2024-11-17 01:25:05.042062] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
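The killprocess traces above all follow one pattern: confirm the PID still names the expected process (`ps --no-headers -o comm=` reports reactor_0 in these runs), refuse to signal anything running as sudo, then kill and reap so the lock files are released before the next test starts. A condensed sketch of that pattern (simplified; `wait` assumes the target is a child of this shell, as it is in the harness):

    # Sketch: kill-and-reap in the style of the killprocess traces.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1      # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # what is it?
        [[ $name == sudo ]] && return 1             # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                     # reap, so its lock files are gone
    }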
00:04:56.890 [2024-11-17 01:25:05.042181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59223 ] 00:04:56.890 [2024-11-17 01:25:05.202163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.890 [2024-11-17 01:25:05.296362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59239 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59239 /var/tmp/spdk2.sock 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59239 /var/tmp/spdk2.sock 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59239 /var/tmp/spdk2.sock 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59239 ']' 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.457 01:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.716 [2024-11-17 01:25:05.944305] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:57.716 [2024-11-17 01:25:05.944449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59239 ] 00:04:57.716 [2024-11-17 01:25:06.118014] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59223 has claimed it. 00:04:57.716 [2024-11-17 01:25:06.118079] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:58.283 ERROR: process (pid: 59239) is no longer running 00:04:58.283 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59239) - No such process 00:04:58.283 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.283 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:58.283 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:58.283 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.283 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:58.283 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.283 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59223 00:04:58.283 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59223 00:04:58.283 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59223 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59223 ']' 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59223 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59223 00:04:58.559 killing process with pid 59223 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59223' 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59223 00:04:58.559 01:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59223 00:04:59.973 00:04:59.973 real 0m3.201s 00:04:59.973 user 0m3.430s 00:04:59.973 sys 0m0.580s 00:04:59.973 01:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.973 01:25:08 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:04:59.973 ************************************ 00:04:59.973 END TEST locking_app_on_locked_coremask 00:04:59.973 ************************************ 00:04:59.973 01:25:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:59.973 01:25:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.973 01:25:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.973 01:25:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.973 ************************************ 00:04:59.973 START TEST locking_overlapped_coremask 00:04:59.973 ************************************ 00:04:59.974 01:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:59.974 01:25:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59292 00:04:59.974 01:25:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59292 /var/tmp/spdk.sock 00:04:59.974 01:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59292 ']' 00:04:59.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.974 01:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.974 01:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.974 01:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.974 01:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.974 01:25:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:04:59.974 01:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.974 [2024-11-17 01:25:08.281424] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:59.974 [2024-11-17 01:25:08.281532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59292 ] 00:05:00.233 [2024-11-17 01:25:08.438008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.233 [2024-11-17 01:25:08.518503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.233 [2024-11-17 01:25:08.518719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.233 [2024-11-17 01:25:08.518809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59310 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59310 /var/tmp/spdk2.sock 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59310 /var/tmp/spdk2.sock 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59310 /var/tmp/spdk2.sock 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59310 ']' 00:05:00.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.800 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.800 [2024-11-17 01:25:09.145412] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
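locking_overlapped_coremask pairs masks 0x7 (cores 0-2) and 0x1c (cores 2-4); they intersect on exactly core 2, which is the core named in the claim error that follows. The overlap is plain bitwise arithmetic:

    # Sketch: which cores do two cpumasks share?
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints overlap: 0x4 -> bit 2 -> core 2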
00:05:00.800 [2024-11-17 01:25:09.145734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59310 ] 00:05:01.058 [2024-11-17 01:25:09.325160] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59292 has claimed it. 00:05:01.058 [2024-11-17 01:25:09.325219] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:01.626 ERROR: process (pid: 59310) is no longer running 00:05:01.627 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59310) - No such process 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59292 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59292 ']' 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59292 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59292 00:05:01.627 killing process with pid 59292 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59292' 00:05:01.627 01:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59292 00:05:01.627 01:25:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59292 00:05:02.561 00:05:02.561 real 0m2.767s 00:05:02.561 user 0m7.534s 00:05:02.561 sys 0m0.398s 00:05:02.561 ************************************ 00:05:02.561 END TEST locking_overlapped_coremask 00:05:02.561 ************************************ 00:05:02.561 01:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.561 01:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.561 01:25:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:02.561 01:25:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.561 01:25:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.562 01:25:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.562 ************************************ 00:05:02.562 START TEST locking_overlapped_coremask_via_rpc 00:05:02.562 ************************************ 00:05:02.562 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:02.562 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59363 00:05:02.562 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59363 /var/tmp/spdk.sock 00:05:02.562 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59363 ']' 00:05:02.562 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.562 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.562 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.562 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.562 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.562 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:02.821 [2024-11-17 01:25:11.083186] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:02.821 [2024-11-17 01:25:11.083583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59363 ] 00:05:02.821 [2024-11-17 01:25:11.238279] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
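check_remaining_locks, traced just above, is a glob-versus-expectation comparison: after the failed second launch, the lock files actually present under /var/tmp must be exactly spdk_cpu_lock_000 through _002, i.e. the cores a 0x7 mask claims and nothing more. A sketch of the same comparison:

    # Sketch: verify surviving lock files match the claimed cores (mask 0x7 -> cores 0-2).
    locks=(/var/tmp/spdk_cpu_lock_*)                 # what is actually on disk
    expected=(/var/tmp/spdk_cpu_lock_{000..002})     # what a 0x7 mask should have claimed
    [[ ${locks[*]} == "${expected[*]}" ]] && echo "locks match" || echo "unexpected lock set"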
00:05:02.821 [2024-11-17 01:25:11.238474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.080 [2024-11-17 01:25:11.325047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.080 [2024-11-17 01:25:11.325318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.080 [2024-11-17 01:25:11.325342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59381 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59381 /var/tmp/spdk2.sock 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59381 ']' 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.646 01:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.646 [2024-11-17 01:25:11.985396] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:03.646 [2024-11-17 01:25:11.985838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59381 ] 00:05:03.905 [2024-11-17 01:25:12.150299] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:03.905 [2024-11-17 01:25:12.150342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.905 [2024-11-17 01:25:12.312872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.905 [2024-11-17 01:25:12.315960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.905 [2024-11-17 01:25:12.315981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.843 [2024-11-17 01:25:13.259971] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59363 has claimed it. 00:05:04.843 request: 00:05:04.843 { 00:05:04.843 "method": "framework_enable_cpumask_locks", 00:05:04.843 "req_id": 1 00:05:04.843 } 00:05:04.843 Got JSON-RPC error response 00:05:04.843 response: 00:05:04.843 { 00:05:04.843 "code": -32603, 00:05:04.843 "message": "Failed to claim CPU core: 2" 00:05:04.843 } 00:05:04.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
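The claim above fails by design: both targets were started with --disable-cpumask-locks, the first one (pid 59363, mask 0x7) then enabled locking over its default socket and took the per-core lock files, and core 2 is the overlap between 0x7 and 0x1c. A condensed sketch of the sequence the test drives, with sockets and pids as in this log (the lock-file paths match the /var/tmp/spdk_cpu_lock_000..002 names the test verifies later):

    rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks  # pid 59363 claims
                                                                  # /var/tmp/spdk_cpu_lock_000..002
    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # -32603: core 2 is already
                                                                  # claimed by pid 59363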
00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59363 /var/tmp/spdk.sock 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59363 ']' 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.843 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.102 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.102 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:05.102 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59381 /var/tmp/spdk2.sock 00:05:05.102 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59381 ']' 00:05:05.102 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.102 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.102 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:05.102 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.102 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.361 ************************************ 00:05:05.361 END TEST locking_overlapped_coremask_via_rpc 00:05:05.361 ************************************ 00:05:05.361 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.361 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:05.361 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:05.361 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:05.361 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:05.361 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:05.361 00:05:05.361 real 0m2.673s 00:05:05.361 user 0m1.051s 00:05:05.361 sys 0m0.144s 00:05:05.361 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.361 01:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.361 01:25:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:05.361 01:25:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59363 ]] 00:05:05.361 01:25:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59363 00:05:05.361 01:25:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59363 ']' 00:05:05.361 01:25:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59363 00:05:05.361 01:25:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:05.361 01:25:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.361 01:25:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59363 00:05:05.361 killing process with pid 59363 00:05:05.361 01:25:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.361 01:25:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.361 01:25:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59363' 00:05:05.361 01:25:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59363 00:05:05.361 01:25:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59363 00:05:06.735 01:25:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59381 ]] 00:05:06.735 01:25:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59381 00:05:06.735 01:25:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59381 ']' 00:05:06.735 01:25:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59381 00:05:06.735 01:25:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:06.735 01:25:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.735 
01:25:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59381 00:05:06.735 killing process with pid 59381 00:05:06.735 01:25:14 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:06.735 01:25:14 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:06.735 01:25:14 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59381' 00:05:06.736 01:25:14 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59381 00:05:06.736 01:25:14 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59381 00:05:08.138 01:25:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:08.138 Process with pid 59363 is not found 00:05:08.138 01:25:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:08.138 01:25:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59363 ]] 00:05:08.138 01:25:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59363 00:05:08.138 01:25:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59363 ']' 00:05:08.138 01:25:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59363 00:05:08.138 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59363) - No such process 00:05:08.138 01:25:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59363 is not found' 00:05:08.138 01:25:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59381 ]] 00:05:08.138 01:25:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59381 00:05:08.138 Process with pid 59381 is not found 00:05:08.138 01:25:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59381 ']' 00:05:08.138 01:25:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59381 00:05:08.138 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59381) - No such process 00:05:08.138 01:25:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59381 is not found' 00:05:08.138 01:25:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:08.138 ************************************ 00:05:08.138 END TEST cpu_locks 00:05:08.138 ************************************ 00:05:08.138 00:05:08.138 real 0m28.707s 00:05:08.138 user 0m49.549s 00:05:08.138 sys 0m4.369s 00:05:08.138 01:25:16 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.138 01:25:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.138 ************************************ 00:05:08.138 END TEST event 00:05:08.138 ************************************ 00:05:08.138 00:05:08.138 real 0m53.694s 00:05:08.138 user 1m39.943s 00:05:08.138 sys 0m7.189s 00:05:08.138 01:25:16 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.138 01:25:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.423 01:25:16 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:08.423 01:25:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.423 01:25:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.423 01:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:08.423 ************************************ 00:05:08.423 START TEST thread 00:05:08.423 ************************************ 00:05:08.423 01:25:16 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:08.423 * Looking for test storage... 
00:05:08.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:08.423 01:25:16 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.423 01:25:16 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.423 01:25:16 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.423 01:25:16 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.423 01:25:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.423 01:25:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.423 01:25:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.423 01:25:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.423 01:25:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.423 01:25:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.423 01:25:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.423 01:25:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.423 01:25:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.423 01:25:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.423 01:25:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.423 01:25:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:08.423 01:25:16 thread -- scripts/common.sh@345 -- # : 1 00:05:08.423 01:25:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.423 01:25:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.423 01:25:16 thread -- scripts/common.sh@365 -- # decimal 1 00:05:08.423 01:25:16 thread -- scripts/common.sh@353 -- # local d=1 00:05:08.423 01:25:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.423 01:25:16 thread -- scripts/common.sh@355 -- # echo 1 00:05:08.423 01:25:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.423 01:25:16 thread -- scripts/common.sh@366 -- # decimal 2 00:05:08.423 01:25:16 thread -- scripts/common.sh@353 -- # local d=2 00:05:08.423 01:25:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.423 01:25:16 thread -- scripts/common.sh@355 -- # echo 2 00:05:08.423 01:25:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.423 01:25:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.423 01:25:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.423 01:25:16 thread -- scripts/common.sh@368 -- # return 0 00:05:08.423 01:25:16 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.423 01:25:16 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.423 --rc genhtml_branch_coverage=1 00:05:08.423 --rc genhtml_function_coverage=1 00:05:08.423 --rc genhtml_legend=1 00:05:08.423 --rc geninfo_all_blocks=1 00:05:08.423 --rc geninfo_unexecuted_blocks=1 00:05:08.423 00:05:08.423 ' 00:05:08.423 01:25:16 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.424 --rc genhtml_branch_coverage=1 00:05:08.424 --rc genhtml_function_coverage=1 00:05:08.424 --rc genhtml_legend=1 00:05:08.424 --rc geninfo_all_blocks=1 00:05:08.424 --rc geninfo_unexecuted_blocks=1 00:05:08.424 00:05:08.424 ' 00:05:08.424 01:25:16 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:08.424 --rc genhtml_branch_coverage=1 00:05:08.424 --rc genhtml_function_coverage=1 00:05:08.424 --rc genhtml_legend=1 00:05:08.424 --rc geninfo_all_blocks=1 00:05:08.424 --rc geninfo_unexecuted_blocks=1 00:05:08.424 00:05:08.424 ' 00:05:08.424 01:25:16 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.424 --rc genhtml_branch_coverage=1 00:05:08.424 --rc genhtml_function_coverage=1 00:05:08.424 --rc genhtml_legend=1 00:05:08.424 --rc geninfo_all_blocks=1 00:05:08.424 --rc geninfo_unexecuted_blocks=1 00:05:08.424 00:05:08.424 ' 00:05:08.424 01:25:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:08.424 01:25:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:08.424 01:25:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.424 01:25:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.424 ************************************ 00:05:08.424 START TEST thread_poller_perf 00:05:08.424 ************************************ 00:05:08.424 01:25:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:08.424 [2024-11-17 01:25:16.758989] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:08.424 [2024-11-17 01:25:16.759152] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59536 ] 00:05:08.685 [2024-11-17 01:25:16.909143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.685 Running 1000 pollers for 1 seconds with 1 microseconds period. 
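The summary block printed below derives poller_cost as busy TSC cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick arithmetic check against the values this run reports (both numbers are taken from the output that follows; integer division matches the tool's rounding):

    echo $(( 2612494758 / 397000 ))           # 6580 cycles per poller invocation
    echo $(( 2612494758 / 397000 * 10 / 26 )) # ~2530 ns at tsc_hz 2600000000 (2.6 cyc/ns)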
00:05:08.685 [2024-11-17 01:25:16.988927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.068 [2024-11-17T01:25:18.527Z] ====================================== 00:05:10.068 [2024-11-17T01:25:18.527Z] busy:2612494758 (cyc) 00:05:10.068 [2024-11-17T01:25:18.527Z] total_run_count: 397000 00:05:10.068 [2024-11-17T01:25:18.527Z] tsc_hz: 2600000000 (cyc) 00:05:10.068 [2024-11-17T01:25:18.527Z] ====================================== 00:05:10.068 [2024-11-17T01:25:18.527Z] poller_cost: 6580 (cyc), 2530 (nsec) 00:05:10.068 00:05:10.068 real 0m1.378s 00:05:10.068 user 0m1.208s 00:05:10.068 sys 0m0.064s 00:05:10.068 01:25:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.068 01:25:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.068 ************************************ 00:05:10.068 END TEST thread_poller_perf 00:05:10.068 ************************************ 00:05:10.068 01:25:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:10.068 01:25:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:10.068 01:25:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.068 01:25:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.068 ************************************ 00:05:10.068 START TEST thread_poller_perf 00:05:10.068 ************************************ 00:05:10.068 01:25:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:10.068 [2024-11-17 01:25:18.187938] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:10.068 [2024-11-17 01:25:18.188600] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59572 ] 00:05:10.068 [2024-11-17 01:25:18.357343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.068 Running 1000 pollers for 1 seconds with 0 microseconds period. 
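Same check for the 0-microsecond run below: a 0-period poller is run on every reactor iteration rather than being scheduled on the timer list, so the measured per-invocation cost drops roughly 13x against the 1-microsecond run above (values from the output that follows):

    echo $(( 2602589298 / 5264000 ))           # 494 cycles per poller invocation
    echo $(( 2602589298 / 5264000 * 10 / 26 )) # ~190 ns at the same 2.6 GHz tsc_hz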
00:05:10.068 [2024-11-17 01:25:18.433504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.447 [2024-11-17T01:25:19.906Z] ====================================== 00:05:11.447 [2024-11-17T01:25:19.906Z] busy:2602589298 (cyc) 00:05:11.447 [2024-11-17T01:25:19.906Z] total_run_count: 5264000 00:05:11.447 [2024-11-17T01:25:19.906Z] tsc_hz: 2600000000 (cyc) 00:05:11.447 [2024-11-17T01:25:19.906Z] ====================================== 00:05:11.447 [2024-11-17T01:25:19.906Z] poller_cost: 494 (cyc), 190 (nsec) 00:05:11.447 00:05:11.447 real 0m1.401s 00:05:11.447 user 0m1.228s 00:05:11.447 sys 0m0.065s 00:05:11.448 01:25:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.448 ************************************ 00:05:11.448 END TEST thread_poller_perf 00:05:11.448 ************************************ 00:05:11.448 01:25:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:11.448 01:25:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:11.448 ************************************ 00:05:11.448 END TEST thread 00:05:11.448 ************************************ 00:05:11.448 00:05:11.448 real 0m3.008s 00:05:11.448 user 0m2.537s 00:05:11.448 sys 0m0.242s 00:05:11.448 01:25:19 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.448 01:25:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.448 01:25:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:11.448 01:25:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:11.448 01:25:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.448 01:25:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.448 01:25:19 -- common/autotest_common.sh@10 -- # set +x 00:05:11.448 ************************************ 00:05:11.448 START TEST app_cmdline 00:05:11.448 ************************************ 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:11.448 * Looking for test storage... 
00:05:11.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.448 01:25:19 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.448 --rc genhtml_branch_coverage=1 00:05:11.448 --rc genhtml_function_coverage=1 00:05:11.448 --rc genhtml_legend=1 00:05:11.448 --rc geninfo_all_blocks=1 00:05:11.448 --rc geninfo_unexecuted_blocks=1 00:05:11.448 00:05:11.448 ' 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.448 --rc genhtml_branch_coverage=1 00:05:11.448 --rc genhtml_function_coverage=1 00:05:11.448 --rc genhtml_legend=1 00:05:11.448 --rc geninfo_all_blocks=1 00:05:11.448 --rc geninfo_unexecuted_blocks=1 00:05:11.448 
00:05:11.448 ' 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.448 --rc genhtml_branch_coverage=1 00:05:11.448 --rc genhtml_function_coverage=1 00:05:11.448 --rc genhtml_legend=1 00:05:11.448 --rc geninfo_all_blocks=1 00:05:11.448 --rc geninfo_unexecuted_blocks=1 00:05:11.448 00:05:11.448 ' 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.448 --rc genhtml_branch_coverage=1 00:05:11.448 --rc genhtml_function_coverage=1 00:05:11.448 --rc genhtml_legend=1 00:05:11.448 --rc geninfo_all_blocks=1 00:05:11.448 --rc geninfo_unexecuted_blocks=1 00:05:11.448 00:05:11.448 ' 00:05:11.448 01:25:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:11.448 01:25:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59656 00:05:11.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.448 01:25:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59656 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59656 ']' 00:05:11.448 01:25:19 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.448 01:25:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:11.448 [2024-11-17 01:25:19.849918] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
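The target below is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods resolve over /var/tmp/spdk.sock and anything else returns JSON-RPC error -32601. A condensed view of the calls the test exercises below:

    rpc.py spdk_get_version        # allowed: returns the version object shown below
    rpc.py rpc_get_methods         # allowed: lists exactly the two permitted methods
    rpc.py env_dpdk_get_mem_stats  # rejected: -32601 'Method not found' (see below)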
00:05:11.448 [2024-11-17 01:25:19.850163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59656 ] 00:05:11.707 [2024-11-17 01:25:20.001727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.707 [2024-11-17 01:25:20.085333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.272 01:25:20 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.272 01:25:20 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:12.272 01:25:20 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:12.530 { 00:05:12.530 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:05:12.530 "fields": { 00:05:12.530 "major": 25, 00:05:12.530 "minor": 1, 00:05:12.530 "patch": 0, 00:05:12.530 "suffix": "-pre", 00:05:12.530 "commit": "83e8405e4" 00:05:12.530 } 00:05:12.530 } 00:05:12.530 01:25:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:12.530 01:25:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:12.530 01:25:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:12.530 01:25:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:12.530 01:25:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:12.530 01:25:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:12.530 01:25:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.530 01:25:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:12.530 01:25:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:12.530 01:25:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.530 01:25:20 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:12.531 01:25:20 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:12.531 01:25:20 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:12.789 request: 00:05:12.789 { 00:05:12.789 "method": "env_dpdk_get_mem_stats", 00:05:12.789 "req_id": 1 00:05:12.789 } 00:05:12.789 Got JSON-RPC error response 00:05:12.789 response: 00:05:12.789 { 00:05:12.789 "code": -32601, 00:05:12.789 "message": "Method not found" 00:05:12.789 } 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.789 01:25:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59656 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59656 ']' 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59656 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59656 00:05:12.789 killing process with pid 59656 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59656' 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@973 -- # kill 59656 00:05:12.789 01:25:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 59656 00:05:14.163 00:05:14.163 real 0m2.668s 00:05:14.163 user 0m3.006s 00:05:14.163 sys 0m0.391s 00:05:14.163 ************************************ 00:05:14.163 END TEST app_cmdline 00:05:14.163 ************************************ 00:05:14.163 01:25:22 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.163 01:25:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:14.163 01:25:22 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:14.163 01:25:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.163 01:25:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.163 01:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.163 ************************************ 00:05:14.163 START TEST version 00:05:14.163 ************************************ 00:05:14.163 01:25:22 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:14.163 * Looking for test storage... 
00:05:14.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:14.163 01:25:22 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.163 01:25:22 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.163 01:25:22 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.163 01:25:22 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.163 01:25:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.163 01:25:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.163 01:25:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.163 01:25:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.163 01:25:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.163 01:25:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.163 01:25:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.163 01:25:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.163 01:25:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.163 01:25:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.163 01:25:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.163 01:25:22 version -- scripts/common.sh@344 -- # case "$op" in 00:05:14.163 01:25:22 version -- scripts/common.sh@345 -- # : 1 00:05:14.163 01:25:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.163 01:25:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.163 01:25:22 version -- scripts/common.sh@365 -- # decimal 1 00:05:14.163 01:25:22 version -- scripts/common.sh@353 -- # local d=1 00:05:14.163 01:25:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.163 01:25:22 version -- scripts/common.sh@355 -- # echo 1 00:05:14.163 01:25:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.163 01:25:22 version -- scripts/common.sh@366 -- # decimal 2 00:05:14.163 01:25:22 version -- scripts/common.sh@353 -- # local d=2 00:05:14.163 01:25:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.163 01:25:22 version -- scripts/common.sh@355 -- # echo 2 00:05:14.163 01:25:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.163 01:25:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.163 01:25:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.163 01:25:22 version -- scripts/common.sh@368 -- # return 0 00:05:14.163 01:25:22 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.163 01:25:22 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.163 --rc genhtml_branch_coverage=1 00:05:14.163 --rc genhtml_function_coverage=1 00:05:14.163 --rc genhtml_legend=1 00:05:14.163 --rc geninfo_all_blocks=1 00:05:14.163 --rc geninfo_unexecuted_blocks=1 00:05:14.163 00:05:14.163 ' 00:05:14.163 01:25:22 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.163 --rc genhtml_branch_coverage=1 00:05:14.163 --rc genhtml_function_coverage=1 00:05:14.163 --rc genhtml_legend=1 00:05:14.163 --rc geninfo_all_blocks=1 00:05:14.163 --rc geninfo_unexecuted_blocks=1 00:05:14.163 00:05:14.163 ' 00:05:14.163 01:25:22 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.163 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:14.163 --rc genhtml_branch_coverage=1 00:05:14.163 --rc genhtml_function_coverage=1 00:05:14.163 --rc genhtml_legend=1 00:05:14.163 --rc geninfo_all_blocks=1 00:05:14.163 --rc geninfo_unexecuted_blocks=1 00:05:14.163 00:05:14.163 ' 00:05:14.163 01:25:22 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.163 --rc genhtml_branch_coverage=1 00:05:14.163 --rc genhtml_function_coverage=1 00:05:14.163 --rc genhtml_legend=1 00:05:14.163 --rc geninfo_all_blocks=1 00:05:14.163 --rc geninfo_unexecuted_blocks=1 00:05:14.163 00:05:14.163 ' 00:05:14.163 01:25:22 version -- app/version.sh@17 -- # get_header_version major 00:05:14.163 01:25:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:14.163 01:25:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:14.163 01:25:22 version -- app/version.sh@14 -- # cut -f2 00:05:14.163 01:25:22 version -- app/version.sh@17 -- # major=25 00:05:14.163 01:25:22 version -- app/version.sh@18 -- # get_header_version minor 00:05:14.163 01:25:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:14.163 01:25:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:14.163 01:25:22 version -- app/version.sh@14 -- # cut -f2 00:05:14.163 01:25:22 version -- app/version.sh@18 -- # minor=1 00:05:14.163 01:25:22 version -- app/version.sh@19 -- # get_header_version patch 00:05:14.163 01:25:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:14.163 01:25:22 version -- app/version.sh@14 -- # cut -f2 00:05:14.163 01:25:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:14.163 01:25:22 version -- app/version.sh@19 -- # patch=0 00:05:14.163 01:25:22 version -- app/version.sh@20 -- # get_header_version suffix 00:05:14.163 01:25:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:14.163 01:25:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:14.163 01:25:22 version -- app/version.sh@14 -- # cut -f2 00:05:14.163 01:25:22 version -- app/version.sh@20 -- # suffix=-pre 00:05:14.163 01:25:22 version -- app/version.sh@22 -- # version=25.1 00:05:14.163 01:25:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:14.164 01:25:22 version -- app/version.sh@28 -- # version=25.1rc0 00:05:14.164 01:25:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:14.164 01:25:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:14.164 01:25:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:14.164 01:25:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:14.164 00:05:14.164 real 0m0.204s 00:05:14.164 user 0m0.138s 00:05:14.164 sys 0m0.093s 00:05:14.164 01:25:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.164 01:25:22 version -- common/autotest_common.sh@10 -- # set +x 00:05:14.164 ************************************ 00:05:14.164 END TEST version 00:05:14.164 ************************************ 00:05:14.164 01:25:22 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:14.164 01:25:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:14.164 01:25:22 -- spdk/autotest.sh@194 -- # uname -s 00:05:14.164 01:25:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:14.164 01:25:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:14.164 01:25:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:14.164 01:25:22 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:14.164 01:25:22 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:14.164 01:25:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:14.164 01:25:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.164 01:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.164 ************************************ 00:05:14.164 START TEST blockdev_nvme 00:05:14.164 ************************************ 00:05:14.164 01:25:22 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:14.423 * Looking for test storage... 00:05:14.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.423 01:25:22 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.423 --rc genhtml_branch_coverage=1 00:05:14.423 --rc genhtml_function_coverage=1 00:05:14.423 --rc genhtml_legend=1 00:05:14.423 --rc geninfo_all_blocks=1 00:05:14.423 --rc geninfo_unexecuted_blocks=1 00:05:14.423 00:05:14.423 ' 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.423 --rc genhtml_branch_coverage=1 00:05:14.423 --rc genhtml_function_coverage=1 00:05:14.423 --rc genhtml_legend=1 00:05:14.423 --rc geninfo_all_blocks=1 00:05:14.423 --rc geninfo_unexecuted_blocks=1 00:05:14.423 00:05:14.423 ' 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.423 --rc genhtml_branch_coverage=1 00:05:14.423 --rc genhtml_function_coverage=1 00:05:14.423 --rc genhtml_legend=1 00:05:14.423 --rc geninfo_all_blocks=1 00:05:14.423 --rc geninfo_unexecuted_blocks=1 00:05:14.423 00:05:14.423 ' 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.423 --rc genhtml_branch_coverage=1 00:05:14.423 --rc genhtml_function_coverage=1 00:05:14.423 --rc genhtml_legend=1 00:05:14.423 --rc geninfo_all_blocks=1 00:05:14.423 --rc geninfo_unexecuted_blocks=1 00:05:14.423 00:05:14.423 ' 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:14.423 01:25:22 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:14.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59822 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:14.423 01:25:22 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59822 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59822 ']' 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.423 01:25:22 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.424 01:25:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:14.424 01:25:22 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:14.424 [2024-11-17 01:25:22.826001] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
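setup_nvme_conf below pipes the output of scripts/gen_nvme.sh into load_subsystem_config; on this VM that expands to one bdev_nvme_attach_controller entry per PCIe controller. The shape of the generated config, condensed from the rpc_cmd line that follows (full JSON appears verbatim below):

    { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        ...   # Nvme1..Nvme3 follow the same pattern at 0000:00:11.0, :12.0, :13.0
    ] }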
00:05:14.424 [2024-11-17 01:25:22.826124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59822 ] 00:05:14.682 [2024-11-17 01:25:22.981683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.682 [2024-11-17 01:25:23.059308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.616 01:25:23 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.616 01:25:23 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:15.616 01:25:23 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:15.616 01:25:23 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:15.616 01:25:23 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:15.616 01:25:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:15.616 01:25:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:15.616 01:25:23 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:15.616 01:25:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.616 01:25:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:15.616 01:25:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.616 01:25:24 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:15.616 01:25:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.616 01:25:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:15.616 01:25:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.616 01:25:24 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:15.875 01:25:24 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.875 01:25:24 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.875 01:25:24 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.875 01:25:24 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:15.875 01:25:24 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:15.875 01:25:24 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:15.875 01:25:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.875 01:25:24 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:15.875 01:25:24 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:15.876 01:25:24 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a86349c1-27a6-4ba9-a74e-e9c8c6a0ed2c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a86349c1-27a6-4ba9-a74e-e9c8c6a0ed2c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "3565ebab-83ab-4166-9145-48f7db4e5d3a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3565ebab-83ab-4166-9145-48f7db4e5d3a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "fef40629-d0df-48e0-96ba-7daae5d34699"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fef40629-d0df-48e0-96ba-7daae5d34699",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f8c7e2c1-4d1d-47ca-8143-27c9803367bf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f8c7e2c1-4d1d-47ca-8143-27c9803367bf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "56669b91-ba80-4b8b-b7fd-fcb0821ba358"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "56669b91-ba80-4b8b-b7fd-fcb0821ba358",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1948fbb4-d4e3-4af7-a7d9-adbed9d1218d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1948fbb4-d4e3-4af7-a7d9-adbed9d1218d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:15.876 01:25:24 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:15.876 01:25:24 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:15.876 01:25:24 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:15.876 01:25:24 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59822 00:05:15.876 01:25:24 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59822 ']' 00:05:15.876 01:25:24 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59822 00:05:15.876 01:25:24 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:15.876 01:25:24 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.876 01:25:24 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59822 00:05:15.876 killing process with pid 59822 00:05:15.876 01:25:24 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.876 01:25:24 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.876 01:25:24 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59822' 00:05:15.876 01:25:24 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59822 00:05:15.876 01:25:24 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59822 00:05:17.314 01:25:25 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:17.314 01:25:25 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:17.314 01:25:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:17.314 01:25:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.314 01:25:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:17.314 ************************************ 00:05:17.314 START TEST bdev_hello_world 00:05:17.314 ************************************ 00:05:17.314 01:25:25 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:17.314 [2024-11-17 01:25:25.714223] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:17.314 [2024-11-17 01:25:25.714341] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:05:17.572 [2024-11-17 01:25:25.866746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.572 [2024-11-17 01:25:25.961743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.139 [2024-11-17 01:25:26.492706] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:18.139 [2024-11-17 01:25:26.492750] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:18.139 [2024-11-17 01:25:26.492768] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:18.139 [2024-11-17 01:25:26.495246] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:18.139 [2024-11-17 01:25:26.495753] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:18.139 [2024-11-17 01:25:26.495783] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:18.139 [2024-11-17 01:25:26.496009] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
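
Aside: the hello_bdev pass above is self-contained and can be replayed outside the harness. A minimal sketch, assuming SPDK's usual top-level "subsystems" wrapper for --json config files and borrowing the first controller from this run's setup_nvme_conf JSON; the /tmp/bdev.json path is hypothetical:

cat > /tmp/bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } } ] } ] }
EOF
# opens Nvme0n1, writes a buffer through an io channel, reads it back, and exits --
# the same open/write/read/"Hello World!" NOTICE sequence hello_bdev.c logged above
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1
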
00:05:18.139 00:05:18.139 [2024-11-17 01:25:26.496028] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:19.073 ************************************ 00:05:19.073 END TEST bdev_hello_world 00:05:19.073 ************************************ 00:05:19.073 00:05:19.073 real 0m1.543s 00:05:19.073 user 0m1.269s 00:05:19.073 sys 0m0.168s 00:05:19.073 01:25:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.073 01:25:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:19.073 01:25:27 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:19.073 01:25:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:19.073 01:25:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.073 01:25:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:19.073 ************************************ 00:05:19.073 START TEST bdev_bounds 00:05:19.073 ************************************ 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59943 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.073 Process bdevio pid: 59943 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59943' 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59943 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59943 ']' 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.073 01:25:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:19.073 [2024-11-17 01:25:27.302522] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
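
The bdev_bounds stage above runs bdevio as a long-lived app rather than a one-shot binary: the process comes up, listens on /var/tmp/spdk.sock, and the per-bdev CUnit suites below are then triggered over RPC. A minimal sketch of that two-step flow, using the paths from this run (flag meanings inferred from the harness behavior logged here):

# start bdevio idle; -w makes it hold off running suites until told to over RPC
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# once the socket is up, kick off the CUnit suites via the perform_tests RPC
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
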
00:05:19.073 [2024-11-17 01:25:27.302645] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59943 ] 00:05:19.073 [2024-11-17 01:25:27.458384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.332 [2024-11-17 01:25:27.540666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.332 [2024-11-17 01:25:27.540967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.332 [2024-11-17 01:25:27.540979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.899 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.899 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:19.899 01:25:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:19.899 I/O targets: 00:05:19.899 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:19.899 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:19.899 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:19.899 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:19.899 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:19.899 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:19.899 00:05:19.899 00:05:19.899 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.899 http://cunit.sourceforge.net/ 00:05:19.899 00:05:19.899 00:05:19.899 Suite: bdevio tests on: Nvme3n1 00:05:19.899 Test: blockdev write read block ...passed 00:05:19.899 Test: blockdev write zeroes read block ...passed 00:05:19.899 Test: blockdev write zeroes read no split ...passed 00:05:19.899 Test: blockdev write zeroes read split ...passed 00:05:19.899 Test: blockdev write zeroes read split partial ...passed 00:05:19.899 Test: blockdev reset ...[2024-11-17 01:25:28.218921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:19.899 passed 00:05:19.899 Test: blockdev write read 8 blocks ...[2024-11-17 01:25:28.221761] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:19.899 passed 00:05:19.899 Test: blockdev write read size > 128k ...passed 00:05:19.899 Test: blockdev write read invalid size ...passed 00:05:19.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:19.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:19.899 Test: blockdev write read max offset ...passed 00:05:19.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:19.899 Test: blockdev writev readv 8 blocks ...passed 00:05:19.899 Test: blockdev writev readv 30 x 1block ...passed 00:05:19.899 Test: blockdev writev readv block ...passed 00:05:19.899 Test: blockdev writev readv size > 128k ...passed 00:05:19.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:19.899 Test: blockdev comparev and writev ...[2024-11-17 01:25:28.229196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3c0a000 len:0x1000 00:05:19.899 [2024-11-17 01:25:28.229240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:19.899 passed 00:05:19.899 Test: blockdev nvme passthru rw ...passed 00:05:19.899 Test: blockdev nvme passthru vendor specific ...passed 00:05:19.899 Test: blockdev nvme admin passthru ...[2024-11-17 01:25:28.229865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:19.899 [2024-11-17 01:25:28.229893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:19.899 passed 00:05:19.899 Test: blockdev copy ...passed 00:05:19.899 Suite: bdevio tests on: Nvme2n3 00:05:19.899 Test: blockdev write read block ...passed 00:05:19.899 Test: blockdev write zeroes read block ...passed 00:05:19.899 Test: blockdev write zeroes read no split ...passed 00:05:19.899 Test: blockdev write zeroes read split ...passed 00:05:19.899 Test: blockdev write zeroes read split partial ...passed 00:05:19.899 Test: blockdev reset ...[2024-11-17 01:25:28.289381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:19.899 [2024-11-17 01:25:28.293975] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:19.899 passed 00:05:19.899 Test: blockdev write read 8 blocks ...passed 00:05:19.899 Test: blockdev write read size > 128k ...passed 00:05:19.899 Test: blockdev write read invalid size ...passed 00:05:19.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:19.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:19.899 Test: blockdev write read max offset ...passed 00:05:19.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:19.899 Test: blockdev writev readv 8 blocks ...passed 00:05:19.899 Test: blockdev writev readv 30 x 1block ...passed 00:05:19.899 Test: blockdev writev readv block ...passed 00:05:19.899 Test: blockdev writev readv size > 128k ...passed 00:05:19.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:19.899 Test: blockdev comparev and writev ...[2024-11-17 01:25:28.302042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x297606000 len:0x1000 00:05:19.899 [2024-11-17 01:25:28.302177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:19.899 passed 00:05:19.899 Test: blockdev nvme passthru rw ...passed 00:05:19.899 Test: blockdev nvme passthru vendor specific ...passed 00:05:19.899 Test: blockdev nvme admin passthru ...[2024-11-17 01:25:28.302960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:19.899 [2024-11-17 01:25:28.302989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:19.899 passed 00:05:19.899 Test: blockdev copy ...passed 00:05:19.899 Suite: bdevio tests on: Nvme2n2 00:05:19.899 Test: blockdev write read block ...passed 00:05:19.899 Test: blockdev write zeroes read block ...passed 00:05:19.899 Test: blockdev write zeroes read no split ...passed 00:05:19.899 Test: blockdev write zeroes read split ...passed 00:05:20.158 Test: blockdev write zeroes read split partial ...passed 00:05:20.158 Test: blockdev reset ...[2024-11-17 01:25:28.357884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:20.158 [2024-11-17 01:25:28.362255] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:20.158 passed 00:05:20.158 Test: blockdev write read 8 blocks ...passed 
00:05:20.158 Test: blockdev write read size > 128k ...passed 00:05:20.158 Test: blockdev write read invalid size ...passed 00:05:20.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:20.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:20.158 Test: blockdev write read max offset ...passed 00:05:20.158 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:20.158 Test: blockdev writev readv 8 blocks ...passed 00:05:20.158 Test: blockdev writev readv 30 x 1block ...passed 00:05:20.158 Test: blockdev writev readv block ...passed 00:05:20.158 Test: blockdev writev readv size > 128k ...passed 00:05:20.158 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:20.158 Test: blockdev comparev and writev ...[2024-11-17 01:25:28.378157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2eb43c000 len:0x1000 00:05:20.158 [2024-11-17 01:25:28.378192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:20.158 passed 00:05:20.158 Test: blockdev nvme passthru rw ...passed 00:05:20.158 Test: blockdev nvme passthru vendor specific ...[2024-11-17 01:25:28.380458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:20.158 [2024-11-17 01:25:28.380550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:20.158 passed 00:05:20.158 Test: blockdev nvme admin passthru ...passed 00:05:20.158 Test: blockdev copy ...passed 00:05:20.158 Suite: bdevio tests on: Nvme2n1 00:05:20.158 Test: blockdev write read block ...passed 00:05:20.158 Test: blockdev write zeroes read block ...passed 00:05:20.159 Test: blockdev write zeroes read no split ...passed 00:05:20.159 Test: blockdev write zeroes read split ...passed 00:05:20.159 Test: blockdev write zeroes read split partial ...passed 00:05:20.159 Test: blockdev reset ...[2024-11-17 01:25:28.440931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:20.159 [2024-11-17 01:25:28.444612] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:20.159 passed 00:05:20.159 Test: blockdev write read 8 blocks ...passed 
00:05:20.159 Test: blockdev write read size > 128k ...passed 00:05:20.159 Test: blockdev write read invalid size ...passed 00:05:20.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:20.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:20.159 Test: blockdev write read max offset ...passed 00:05:20.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:20.159 Test: blockdev writev readv 8 blocks ...passed 00:05:20.159 Test: blockdev writev readv 30 x 1block ...passed 00:05:20.159 Test: blockdev writev readv block ...passed 00:05:20.159 Test: blockdev writev readv size > 128k ...passed 00:05:20.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:20.159 Test: blockdev comparev and writev ...[2024-11-17 01:25:28.461524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2eb438000 len:0x1000 00:05:20.159 [2024-11-17 01:25:28.461645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:20.159 passed 00:05:20.159 Test: blockdev nvme passthru rw ...passed 00:05:20.159 Test: blockdev nvme passthru vendor specific ...[2024-11-17 01:25:28.463266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:20.159 [2024-11-17 01:25:28.463326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:20.159 passed 00:05:20.159 Test: blockdev nvme admin passthru ...passed 00:05:20.159 Test: blockdev copy ...passed 00:05:20.159 Suite: bdevio tests on: Nvme1n1 00:05:20.159 Test: blockdev write read block ...passed 00:05:20.159 Test: blockdev write zeroes read block ...passed 00:05:20.159 Test: blockdev write zeroes read no split ...passed 00:05:20.159 Test: blockdev write zeroes read split ...passed 00:05:20.159 Test: blockdev write zeroes read split partial ...passed 00:05:20.159 Test: blockdev reset ...[2024-11-17 01:25:28.515435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:20.159 [2024-11-17 01:25:28.519745] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:05:20.159 passed 00:05:20.159 Test: blockdev write read 8 blocks ...passed 
00:05:20.159 Test: blockdev write read size > 128k ...passed 00:05:20.159 Test: blockdev write read invalid size ...passed 00:05:20.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:20.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:20.159 Test: blockdev write read max offset ...passed 00:05:20.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:20.159 Test: blockdev writev readv 8 blocks ...passed 00:05:20.159 Test: blockdev writev readv 30 x 1block ...passed 00:05:20.159 Test: blockdev writev readv block ...passed 00:05:20.159 Test: blockdev writev readv size > 128k ...passed 00:05:20.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:20.159 Test: blockdev comparev and writev ...[2024-11-17 01:25:28.532058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2eb434000 len:0x1000 00:05:20.159 [2024-11-17 01:25:28.532094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:20.159 passed 00:05:20.159 Test: blockdev nvme passthru rw ...passed 00:05:20.159 Test: blockdev nvme passthru vendor specific ...passed 00:05:20.159 Test: blockdev nvme admin passthru ...[2024-11-17 01:25:28.532836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:20.159 [2024-11-17 01:25:28.532861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:20.159 passed 00:05:20.159 Test: blockdev copy ...passed 00:05:20.159 Suite: bdevio tests on: Nvme0n1 00:05:20.159 Test: blockdev write read block ...passed 00:05:20.159 Test: blockdev write zeroes read block ...passed 00:05:20.159 Test: blockdev write zeroes read no split ...passed 00:05:20.159 Test: blockdev write zeroes read split ...passed 00:05:20.159 Test: blockdev write zeroes read split partial ...passed 00:05:20.159 Test: blockdev reset ...[2024-11-17 01:25:28.582530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:20.159 [2024-11-17 01:25:28.586006] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:05:20.159 passed 00:05:20.159 Test: blockdev write read 8 blocks ...passed 00:05:20.159 Test: blockdev write read size > 128k ...passed 00:05:20.159 Test: blockdev write read invalid size ...passed 00:05:20.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:20.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:20.159 Test: blockdev write read max offset ...passed 00:05:20.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:20.159 Test: blockdev writev readv 8 blocks ...passed 00:05:20.159 Test: blockdev writev readv 30 x 1block ...passed 00:05:20.159 Test: blockdev writev readv block ...passed 00:05:20.159 Test: blockdev writev readv size > 128k ...passed 00:05:20.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:20.159 Test: blockdev comparev and writev ...[2024-11-17 01:25:28.594652] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:20.159 separate metadata which is not supported yet. 
00:05:20.159 passed 00:05:20.159 Test: blockdev nvme passthru rw ...passed 00:05:20.159 Test: blockdev nvme passthru vendor specific ...[2024-11-17 01:25:28.595339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:20.159 [2024-11-17 01:25:28.595409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:20.159 passed 00:05:20.159 Test: blockdev nvme admin passthru ...passed 00:05:20.159 Test: blockdev copy ...passed 00:05:20.159 00:05:20.159 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.159 suites 6 6 n/a 0 0 00:05:20.159 tests 138 138 138 0 0 00:05:20.159 asserts 893 893 893 0 n/a 00:05:20.159 00:05:20.159 Elapsed time = 1.096 seconds 00:05:20.159 0 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59943 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59943 ']' 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59943 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59943 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59943' 00:05:20.418 killing process with pid 59943 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59943 00:05:20.418 01:25:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59943 00:05:20.988 01:25:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:20.988 00:05:20.988 real 0m2.056s 00:05:20.988 user 0m5.180s 00:05:20.988 sys 0m0.260s 00:05:20.988 01:25:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.988 01:25:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:20.988 ************************************ 00:05:20.988 END TEST bdev_bounds 00:05:20.988 ************************************ 00:05:20.988 01:25:29 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:20.988 01:25:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:20.988 01:25:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.988 01:25:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:20.988 ************************************ 00:05:20.988 START TEST bdev_nbd 00:05:20.988 ************************************ 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:20.988 01:25:29 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59997 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59997 /var/tmp/spdk-nbd.sock 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59997 ']' 00:05:20.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.988 01:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:20.988 [2024-11-17 01:25:29.422025] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
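
The bdev_nbd stage starting here exercises a different data path: bdev_svc is a bare SPDK app that only brings up the bdev layer and serves RPC on /var/tmp/spdk-nbd.sock, and each bdev is then exported as a kernel /dev/nbdX block device. A minimal sketch of one export/verify/teardown cycle from the loops below, using only RPCs traced in this run (the /tmp output path is hypothetical):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc nbd_start_disk Nvme0n1 /dev/nbd0    # map the bdev to an nbd device
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct    # one 4 KiB O_DIRECT read proves the device answers
$rpc nbd_stop_disk /dev/nbd0             # unmap it again
$rpc nbd_get_disks                       # expect [] once every export is stopped
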
00:05:20.988 [2024-11-17 01:25:29.422130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:21.248 [2024-11-17 01:25:29.579338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.248 [2024-11-17 01:25:29.676263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:22.183 1+0 records in 
00:05:22.183 1+0 records out 00:05:22.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879424 s, 4.7 MB/s 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:22.183 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:22.442 1+0 records in 00:05:22.442 1+0 records out 00:05:22.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00241079 s, 1.7 MB/s 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:22.442 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.701 01:25:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:22.701 1+0 records in 00:05:22.701 1+0 records out 00:05:22.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517467 s, 7.9 MB/s 00:05:22.701 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:22.701 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:22.701 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:22.701 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.701 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:22.701 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:22.701 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:22.701 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:22.959 1+0 records in 00:05:22.959 1+0 records out 00:05:22.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340928 s, 12.0 MB/s 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:22.959 01:25:31 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.959 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:22.960 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:22.960 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:22.960 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:23.219 1+0 records in 00:05:23.219 1+0 records out 00:05:23.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761224 s, 5.4 MB/s 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:23.219 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:23.478 1+0 records in 00:05:23.478 1+0 records out 00:05:23.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111421 s, 3.7 MB/s 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd0", 00:05:23.478 "bdev_name": "Nvme0n1" 00:05:23.478 }, 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd1", 00:05:23.478 "bdev_name": "Nvme1n1" 00:05:23.478 }, 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd2", 00:05:23.478 "bdev_name": "Nvme2n1" 00:05:23.478 }, 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd3", 00:05:23.478 "bdev_name": "Nvme2n2" 00:05:23.478 }, 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd4", 00:05:23.478 "bdev_name": "Nvme2n3" 00:05:23.478 }, 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd5", 00:05:23.478 "bdev_name": "Nvme3n1" 00:05:23.478 } 00:05:23.478 ]' 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd0", 00:05:23.478 "bdev_name": "Nvme0n1" 00:05:23.478 }, 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd1", 00:05:23.478 "bdev_name": "Nvme1n1" 00:05:23.478 }, 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd2", 00:05:23.478 "bdev_name": "Nvme2n1" 00:05:23.478 }, 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd3", 00:05:23.478 "bdev_name": "Nvme2n2" 00:05:23.478 }, 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd4", 00:05:23.478 "bdev_name": "Nvme2n3" 00:05:23.478 }, 00:05:23.478 { 00:05:23.478 "nbd_device": "/dev/nbd5", 00:05:23.478 "bdev_name": "Nvme3n1" 00:05:23.478 } 00:05:23.478 ]' 00:05:23.478 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:23.737 01:25:31 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:23.737 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.737 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:23.737 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.737 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:23.737 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.737 01:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.737 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.737 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.737 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.737 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.737 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.737 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.737 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:23.737 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.737 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.737 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.997 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.997 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.997 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.997 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.997 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.997 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.997 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:23.997 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.997 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.997 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:24.256 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:24.256 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:24.256 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:24.256 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.256 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.256 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:24.256 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:24.256 01:25:32 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:24.256 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.256 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:24.515 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:24.515 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:24.515 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:24.515 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.515 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.515 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:24.515 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:24.515 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.515 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.515 01:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.774 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.033 01:25:33 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:25.033 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:25.292 /dev/nbd0 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.292 
01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:25.292 1+0 records in 00:05:25.292 1+0 records out 00:05:25.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378059 s, 10.8 MB/s 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:25.292 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:25.551 /dev/nbd1 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:25.551 1+0 records in 00:05:25.551 1+0 records out 00:05:25.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724991 s, 5.6 MB/s 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:05:25.551 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.552 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:25.552 01:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:25.810 /dev/nbd10 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:25.810 1+0 records in 00:05:25.810 1+0 records out 00:05:25.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853155 s, 4.8 MB/s 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:25.810 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:26.069 /dev/nbd11 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.069 01:25:34 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:26.069 1+0 records in 00:05:26.069 1+0 records out 00:05:26.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103048 s, 4.0 MB/s 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:26.069 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:26.328 /dev/nbd12 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:26.328 1+0 records in 00:05:26.328 1+0 records out 00:05:26.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489283 s, 8.4 MB/s 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:26.328 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:26.587 /dev/nbd13 
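The waitfornbd helper exercised repeatedly above has a simple shape: poll /proc/partitions until the kernel lists the device, then prove it answers I/O with one direct 4 KiB read. A minimal bash sketch of that pattern follows; the retry count and the grep/dd/stat calls mirror the trace, while the scratch path and the sleep between attempts are assumptions (the trace writes to test/bdev/nbdtest and shows no explicit pacing):

    waitfornbd() {
        local nbd_name=$1 i
        local tmp=/tmp/nbdtest                   # assumed scratch file
        for ((i = 1; i <= 20; i++)); do          # wait for the device node to register
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                            # assumed back-off, not shown in the trace
        done
        for ((i = 1; i <= 20; i++)); do          # confirm the device services reads
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
                local size
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                (( size != 0 )) && return 0      # a non-empty read means the nbd is live
            fi
        done
        return 1
    }

The matching waitfornbd_exit used by nbd_stop_disks inverts the first loop: it breaks once the name disappears from /proc/partitions.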
00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:26.587 1+0 records in 00:05:26.587 1+0 records out 00:05:26.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837523 s, 4.9 MB/s 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.587 01:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.845 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd0", 00:05:26.845 "bdev_name": "Nvme0n1" 00:05:26.845 }, 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd1", 00:05:26.845 "bdev_name": "Nvme1n1" 00:05:26.845 }, 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd10", 00:05:26.845 "bdev_name": "Nvme2n1" 00:05:26.845 }, 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd11", 00:05:26.845 "bdev_name": "Nvme2n2" 00:05:26.845 }, 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd12", 00:05:26.845 "bdev_name": "Nvme2n3" 00:05:26.845 }, 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd13", 00:05:26.845 "bdev_name": "Nvme3n1" 00:05:26.845 } 00:05:26.845 ]' 00:05:26.845 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.845 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd0", 00:05:26.845 "bdev_name": "Nvme0n1" 00:05:26.845 }, 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd1", 00:05:26.845 "bdev_name": "Nvme1n1" 00:05:26.845 
}, 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd10", 00:05:26.845 "bdev_name": "Nvme2n1" 00:05:26.845 }, 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd11", 00:05:26.845 "bdev_name": "Nvme2n2" 00:05:26.845 }, 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd12", 00:05:26.845 "bdev_name": "Nvme2n3" 00:05:26.845 }, 00:05:26.845 { 00:05:26.845 "nbd_device": "/dev/nbd13", 00:05:26.845 "bdev_name": "Nvme3n1" 00:05:26.845 } 00:05:26.845 ]' 00:05:26.845 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.845 /dev/nbd1 00:05:26.845 /dev/nbd10 00:05:26.845 /dev/nbd11 00:05:26.845 /dev/nbd12 00:05:26.845 /dev/nbd13' 00:05:26.845 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.845 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.845 /dev/nbd1 00:05:26.845 /dev/nbd10 00:05:26.845 /dev/nbd11 00:05:26.845 /dev/nbd12 00:05:26.846 /dev/nbd13' 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:26.846 256+0 records in 00:05:26.846 256+0 records out 00:05:26.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0089187 s, 118 MB/s 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.846 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.104 256+0 records in 00:05:27.104 256+0 records out 00:05:27.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198215 s, 5.3 MB/s 00:05:27.104 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.104 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.104 256+0 records in 00:05:27.104 256+0 records out 00:05:27.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177715 s, 5.9 MB/s 00:05:27.104 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.104 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:27.362 256+0 records in 00:05:27.362 256+0 records out 00:05:27.362 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.209837 s, 5.0 MB/s 00:05:27.362 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.362 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:27.621 256+0 records in 00:05:27.621 256+0 records out 00:05:27.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105886 s, 9.9 MB/s 00:05:27.621 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.621 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:27.621 256+0 records in 00:05:27.621 256+0 records out 00:05:27.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0750866 s, 14.0 MB/s 00:05:27.621 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.621 01:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:27.621 256+0 records in 00:05:27.621 256+0 records out 00:05:27.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147268 s, 7.1 MB/s 00:05:27.621 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:27.621 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:27.621 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.621 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.621 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:27.621 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.621 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.621 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.621 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.880 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.139 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.139 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.139 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.139 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.139 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.139 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.139 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:28.139 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.139 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.139 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:28.397 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:28.398 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:28.398 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:28.398 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.398 01:25:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.398 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:28.398 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:28.398 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.398 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.398 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:28.656 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:28.656 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:28.656 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:28.656 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.656 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.656 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:28.656 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:28.656 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.656 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.656 01:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:28.916 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:28.916 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:28.916 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:28.916 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.916 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.916 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:28.916 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:28.916 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.916 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.916 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.176 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:29.434 malloc_lvol_verify 00:05:29.434 01:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:29.693 7f088a62-a306-4ea3-a147-dce7a20f7892 00:05:29.693 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:29.951 1f08eda5-1387-40de-b945-c8ebc29bcca5 00:05:29.951 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:30.211 /dev/nbd0 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:30.211 mke2fs 1.47.0 (5-Feb-2023) 00:05:30.211 Discarding device blocks: 0/4096 done 00:05:30.211 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:30.211 00:05:30.211 Allocating group tables: 0/1 done 00:05:30.211 Writing inode tables: 0/1 done 00:05:30.211 Creating journal (1024 blocks): done 00:05:30.211 Writing superblocks and filesystem accounting information: 0/1 done 00:05:30.211 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:30.211 01:25:38 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.211 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59997 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59997 ']' 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59997 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59997 00:05:30.469 killing process with pid 59997 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59997' 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59997 00:05:30.469 01:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59997 00:05:31.037 01:25:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:31.037 00:05:31.037 real 0m9.976s 00:05:31.037 user 0m13.956s 00:05:31.037 sys 0m3.273s 00:05:31.037 01:25:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.037 ************************************ 00:05:31.037 END TEST bdev_nbd 00:05:31.037 ************************************ 00:05:31.037 01:25:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:31.037 skipping fio tests on NVMe due to multi-ns failures. 00:05:31.037 01:25:39 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:05:31.037 01:25:39 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:05:31.037 01:25:39 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
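Stripped of the xtrace noise, the lvol round-trip that bdev_nbd finished with above is five RPCs plus a mkfs. Reconstructed from the trace (each command appears verbatim in the log; the size comments reflect rpc.py's usual MiB/byte units and are interpretive):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks (assumed units)
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume on that store
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # the mke2fs output above is this step
    $rpc nbd_stop_disk /dev/nbd0                           # torn down via nbd_stop_disks in the trace

Only the shell variable wrapping is added here for readability.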
00:05:31.037 01:25:39 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:31.037 01:25:39 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:31.037 01:25:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:05:31.037 01:25:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.037 01:25:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:31.037 ************************************ 00:05:31.037 START TEST bdev_verify 00:05:31.037 ************************************ 00:05:31.037 01:25:39 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:31.037 [2024-11-17 01:25:39.446647] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:31.037 [2024-11-17 01:25:39.446744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60375 ] 00:05:31.294 [2024-11-17 01:25:39.591330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.294 [2024-11-17 01:25:39.673081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.295 [2024-11-17 01:25:39.673195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.859 Running I/O for 5 seconds... 00:05:34.225 21504.00 IOPS, 84.00 MiB/s [2024-11-17T01:25:43.618Z] 21504.00 IOPS, 84.00 MiB/s [2024-11-17T01:25:44.552Z] 21589.33 IOPS, 84.33 MiB/s [2024-11-17T01:25:45.489Z] 21424.00 IOPS, 83.69 MiB/s [2024-11-17T01:25:45.489Z] 21376.00 IOPS, 83.50 MiB/s 00:05:37.030 Latency(us) 00:05:37.030 [2024-11-17T01:25:45.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:37.030 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0x0 length 0xbd0bd 00:05:37.030 Nvme0n1 : 5.04 1776.59 6.94 0.00 0.00 71725.45 11241.94 73400.32 00:05:37.030 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:05:37.030 Nvme0n1 : 5.04 1726.24 6.74 0.00 0.00 73799.44 12653.49 92758.65 00:05:37.030 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0x0 length 0xa0000 00:05:37.030 Nvme1n1 : 5.06 1781.87 6.96 0.00 0.00 71450.21 5444.53 70577.23 00:05:37.030 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0xa0000 length 0xa0000 00:05:37.030 Nvme1n1 : 5.08 1737.23 6.79 0.00 0.00 73171.30 10284.11 70980.53 00:05:37.030 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0x0 length 0x80000 00:05:37.030 Nvme2n1 : 5.07 1781.43 6.96 0.00 0.00 71333.66 5570.56 65334.35 00:05:37.030 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0x80000 length 0x80000 00:05:37.030 Nvme2n1 : 5.09 1736.22 6.78 0.00 0.00 72976.05 11947.72 62914.56 00:05:37.030 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0x0 length 0x80000 00:05:37.030 Nvme2n2 : 5.07 1780.93 6.96 0.00 0.00 71219.82 6125.10 66947.54 00:05:37.030 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0x80000 length 0x80000 00:05:37.030 Nvme2n2 : 5.09 1735.76 6.78 0.00 0.00 72833.88 12149.37 63721.16 00:05:37.030 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0x0 length 0x80000 00:05:37.030 Nvme2n3 : 5.08 1790.18 6.99 0.00 0.00 70791.13 6906.49 68157.44 00:05:37.030 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0x80000 length 0x80000 00:05:37.030 Nvme2n3 : 5.09 1735.29 6.78 0.00 0.00 72635.83 12351.02 67754.14 00:05:37.030 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0x0 length 0x20000 00:05:37.030 Nvme3n1 : 5.08 1789.74 6.99 0.00 0.00 70660.69 6704.84 72997.02 00:05:37.030 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:37.030 Verification LBA range: start 0x20000 length 0x20000 00:05:37.030 Nvme3n1 : 5.09 1734.84 6.78 0.00 0.00 72579.94 11846.89 70980.53 00:05:37.030 [2024-11-17T01:25:45.489Z] =================================================================================================================== 00:05:37.030 [2024-11-17T01:25:45.489Z] Total : 21106.33 82.45 0.00 0.00 72084.97 5444.53 92758.65 00:05:37.965 00:05:37.965 real 0m6.770s 00:05:37.965 user 0m12.736s 00:05:37.965 sys 0m0.193s 00:05:37.965 01:25:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.965 ************************************ 00:05:37.965 END TEST bdev_verify 00:05:37.965 ************************************ 00:05:37.965 01:25:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:37.965 01:25:46 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:37.965 01:25:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:05:37.965 01:25:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.965 01:25:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:37.965 ************************************ 00:05:37.965 START TEST bdev_verify_big_io 00:05:37.965 ************************************ 00:05:37.965 01:25:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:37.966 [2024-11-17 01:25:46.275097] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
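All of these verification sub-tests drive the same bdevperf binary and differ only in flags. Breaking down the invocation recorded above (the -q/-o/-w/-t/-m glosses follow standard bdevperf usage; -C and the trailing empty argument are reproduced from the trace without interpretation):

    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # defines the six Nvme* bdevs under test
        -q 128          # queue depth
        -o 4096         # I/O size in bytes (65536 for the big-io run)
        -w verify       # workload; later runs switch to write_zeroes
        -t 5            # run time in seconds
        -m 0x3          # core mask: cores 0 and 1, matching the two reactor lines above
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}" -C ''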
00:05:37.966 [2024-11-17 01:25:46.275216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60467 ] 00:05:38.224 [2024-11-17 01:25:46.434922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.224 [2024-11-17 01:25:46.534378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.224 [2024-11-17 01:25:46.534506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.790 Running I/O for 5 seconds... 00:05:43.974 999.00 IOPS, 62.44 MiB/s [2024-11-17T01:25:53.366Z] 1962.00 IOPS, 122.62 MiB/s [2024-11-17T01:25:53.625Z] 2334.00 IOPS, 145.88 MiB/s [2024-11-17T01:25:53.625Z] 2351.50 IOPS, 146.97 MiB/s 00:05:45.166 Latency(us) 00:05:45.166 [2024-11-17T01:25:53.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:45.166 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0x0 length 0xbd0b 00:05:45.166 Nvme0n1 : 5.70 92.60 5.79 0.00 0.00 1277715.79 37506.76 1626099.40 00:05:45.166 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0xbd0b length 0xbd0b 00:05:45.166 Nvme0n1 : 5.78 132.91 8.31 0.00 0.00 931795.63 38313.35 987274.63 00:05:45.166 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0x0 length 0xa000 00:05:45.166 Nvme1n1 : 5.96 104.57 6.54 0.00 0.00 1112723.44 79449.80 1329271.73 00:05:45.166 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0xa000 length 0xa000 00:05:45.166 Nvme1n1 : 5.88 131.22 8.20 0.00 0.00 900870.55 108890.58 896935.78 00:05:45.166 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0x0 length 0x8000 00:05:45.166 Nvme2n1 : 5.96 107.32 6.71 0.00 0.00 1032061.56 70173.93 1116330.14 00:05:45.166 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0x8000 length 0x8000 00:05:45.166 Nvme2n1 : 5.92 133.99 8.37 0.00 0.00 862760.18 102841.11 758201.11 00:05:45.166 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0x0 length 0x8000 00:05:45.166 Nvme2n2 : 6.07 122.89 7.68 0.00 0.00 864126.29 20467.40 1374441.16 00:05:45.166 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0x8000 length 0x8000 00:05:45.166 Nvme2n2 : 5.89 126.40 7.90 0.00 0.00 886228.31 101227.91 1458327.24 00:05:45.166 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0x0 length 0x8000 00:05:45.166 Nvme2n3 : 6.16 149.48 9.34 0.00 0.00 682733.99 14115.45 1413157.81 00:05:45.166 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0x8000 length 0x8000 00:05:45.166 Nvme2n3 : 5.93 138.76 8.67 0.00 0.00 799009.35 5318.50 1497043.89 00:05:45.166 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0x0 length 0x2000 00:05:45.166 Nvme3n1 : 6.38 257.27 16.08 0.00 0.00 380549.42 148.87 1451874.46 00:05:45.166 
Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:45.166 Verification LBA range: start 0x2000 length 0x2000 00:05:45.166 Nvme3n1 : 5.93 151.01 9.44 0.00 0.00 715623.22 5620.97 884030.23 00:05:45.166 [2024-11-17T01:25:53.625Z] =================================================================================================================== 00:05:45.166 [2024-11-17T01:25:53.625Z] Total : 1648.43 103.03 0.00 0.00 806394.03 148.87 1626099.40 00:05:47.066 00:05:47.066 real 0m8.856s 00:05:47.066 user 0m16.765s 00:05:47.066 sys 0m0.251s 00:05:47.066 ************************************ 00:05:47.066 END TEST bdev_verify_big_io 00:05:47.066 ************************************ 00:05:47.066 01:25:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.066 01:25:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:05:47.066 01:25:55 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:47.066 01:25:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:05:47.066 01:25:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.066 01:25:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:47.066 ************************************ 00:05:47.066 START TEST bdev_write_zeroes 00:05:47.066 ************************************ 00:05:47.066 01:25:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:47.066 [2024-11-17 01:25:55.169015] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:47.066 [2024-11-17 01:25:55.169135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60584 ] 00:05:47.066 [2024-11-17 01:25:55.327443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.066 [2024-11-17 01:25:55.422524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.633 Running I/O for 1 seconds... 
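A quick sanity check on the bandwidth columns in the two tables above: bdevperf's MiB/s is simply IOPS times the I/O size, so 21504 IOPS * 4096 B = 88080384 B/s = exactly 84.00 MiB/s on the first verify progress line, and 999 IOPS * 65536 B / 1048576 = 62.44 MiB/s on the first big-io line. The per-job table rows follow the same arithmetic.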
00:05:48.567 77568.00 IOPS, 303.00 MiB/s 00:05:48.567 Latency(us) 00:05:48.567 [2024-11-17T01:25:57.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:48.567 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:48.567 Nvme0n1 : 1.02 12839.58 50.15 0.00 0.00 9948.18 8368.44 20769.87 00:05:48.567 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:48.567 Nvme1n1 : 1.02 12824.77 50.10 0.00 0.00 9948.27 8469.27 20870.70 00:05:48.567 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:48.567 Nvme2n1 : 1.02 12810.30 50.04 0.00 0.00 9923.47 8368.44 19459.15 00:05:48.567 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:48.567 Nvme2n2 : 1.03 12795.88 49.98 0.00 0.00 9891.44 8418.86 18148.43 00:05:48.567 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:48.567 Nvme2n3 : 1.03 12781.25 49.93 0.00 0.00 9867.08 7158.55 18047.61 00:05:48.567 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:48.567 Nvme3n1 : 1.03 12766.86 49.87 0.00 0.00 9851.42 5419.32 19660.80 00:05:48.567 [2024-11-17T01:25:57.026Z] =================================================================================================================== 00:05:48.567 [2024-11-17T01:25:57.026Z] Total : 76818.64 300.07 0.00 0.00 9904.98 5419.32 20870.70 00:05:49.500 00:05:49.500 real 0m2.627s 00:05:49.500 user 0m2.341s 00:05:49.500 sys 0m0.174s 00:05:49.500 01:25:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.500 01:25:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:05:49.500 ************************************ 00:05:49.500 END TEST bdev_write_zeroes 00:05:49.500 ************************************ 00:05:49.500 01:25:57 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:49.500 01:25:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:05:49.500 01:25:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.500 01:25:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.500 ************************************ 00:05:49.500 START TEST bdev_json_nonenclosed 00:05:49.500 ************************************ 00:05:49.500 01:25:57 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:49.500 [2024-11-17 01:25:57.845158] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
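The nonenclosed case hands bdevperf a configuration whose top level is not wrapped in {}, and passes when json_config rejects it with the 'not enclosed in {}' error shown below. The log never prints nonenclosed.json itself; a plausible minimal shape (an assumption, not the actual file) is a bare key/value pair with the outer braces missing:

    "subsystems": []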
00:05:49.500 [2024-11-17 01:25:57.845246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60637 ] 00:05:49.758 [2024-11-17 01:25:57.998358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.758 [2024-11-17 01:25:58.094087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.758 [2024-11-17 01:25:58.094163] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:05:49.758 [2024-11-17 01:25:58.094179] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:49.758 [2024-11-17 01:25:58.094189] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.016 00:05:50.016 real 0m0.475s 00:05:50.016 user 0m0.290s 00:05:50.016 sys 0m0.082s 00:05:50.016 ************************************ 00:05:50.016 END TEST bdev_json_nonenclosed 00:05:50.016 ************************************ 00:05:50.016 01:25:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.016 01:25:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:05:50.016 01:25:58 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:50.016 01:25:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:05:50.017 01:25:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.017 01:25:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.017 ************************************ 00:05:50.017 START TEST bdev_json_nonarray 00:05:50.017 ************************************ 00:05:50.017 01:25:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:50.017 [2024-11-17 01:25:58.372078] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:50.017 [2024-11-17 01:25:58.372203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60657 ] 00:05:50.275 [2024-11-17 01:25:58.530620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.275 [2024-11-17 01:25:58.627693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.275 [2024-11-17 01:25:58.627780] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
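The nonarray case just above is the complement: the braces are present but 'subsystems' is bound to something other than an array, so json_config rejects it with the "'subsystems' should be an array" error. Again the file is not echoed in the log; a plausible shape (assumed) is:

    { "subsystems": {} }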
00:05:50.275 [2024-11-17 01:25:58.627809] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:50.275 [2024-11-17 01:25:58.627818] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.533 00:05:50.533 real 0m0.498s 00:05:50.533 user 0m0.297s 00:05:50.533 sys 0m0.095s 00:05:50.533 ************************************ 00:05:50.533 END TEST bdev_json_nonarray 00:05:50.533 ************************************ 00:05:50.533 01:25:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.533 01:25:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:05:50.533 01:25:58 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:05:50.533 00:05:50.533 real 0m36.269s 00:05:50.533 user 0m56.061s 00:05:50.533 sys 0m5.220s 00:05:50.533 ************************************ 00:05:50.533 01:25:58 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.533 01:25:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.533 END TEST blockdev_nvme 00:05:50.533 ************************************ 00:05:50.533 01:25:58 -- spdk/autotest.sh@209 -- # uname -s 00:05:50.533 01:25:58 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:05:50.533 01:25:58 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:05:50.533 01:25:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:50.533 01:25:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.533 01:25:58 -- common/autotest_common.sh@10 -- # set +x 00:05:50.533 ************************************ 00:05:50.533 START TEST blockdev_nvme_gpt 00:05:50.533 ************************************ 00:05:50.533 01:25:58 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:05:50.533 * Looking for test storage... 
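The two JSON negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) exercise the same loader path in json_config.c: bdevperf is pointed at a deliberately malformed configuration and is expected to log the *ERROR* shown, finish with spdk_app_stop'd on non-zero, and not crash; the harness treats that graceful rejection as a pass. A minimal sketch of what such fixtures could look like (assumed shapes for illustration only, not the repository files verbatim; a valid config is a single JSON object whose "subsystems" member is an array):

  cat > nonenclosed.json <<'EOF'   # assumed: content not enclosed in {}
  "subsystems": []
  EOF
  cat > nonarray.json <<'EOF'      # assumed: "subsystems" is an object, not an array
  { "subsystems": {} }
  EOF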
00:05:50.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:50.533 01:25:58 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.533 01:25:58 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.533 01:25:58 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.791 01:25:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.791 --rc genhtml_branch_coverage=1 00:05:50.791 --rc genhtml_function_coverage=1 00:05:50.791 --rc genhtml_legend=1 00:05:50.791 --rc geninfo_all_blocks=1 00:05:50.791 --rc geninfo_unexecuted_blocks=1 00:05:50.791 00:05:50.791 ' 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.791 --rc 
genhtml_branch_coverage=1 00:05:50.791 --rc genhtml_function_coverage=1 00:05:50.791 --rc genhtml_legend=1 00:05:50.791 --rc geninfo_all_blocks=1 00:05:50.791 --rc geninfo_unexecuted_blocks=1 00:05:50.791 00:05:50.791 ' 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.791 --rc genhtml_branch_coverage=1 00:05:50.791 --rc genhtml_function_coverage=1 00:05:50.791 --rc genhtml_legend=1 00:05:50.791 --rc geninfo_all_blocks=1 00:05:50.791 --rc geninfo_unexecuted_blocks=1 00:05:50.791 00:05:50.791 ' 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.791 --rc genhtml_branch_coverage=1 00:05:50.791 --rc genhtml_function_coverage=1 00:05:50.791 --rc genhtml_legend=1 00:05:50.791 --rc geninfo_all_blocks=1 00:05:50.791 --rc geninfo_unexecuted_blocks=1 00:05:50.791 00:05:50.791 ' 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60741 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60741 
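The scripts/common.sh xtrace above is the coverage setup deciding whether the installed lcov predates version 2 (lt 1.15 2): each version string is split on '.', '-' and ':' and the resulting fields are compared numerically, left to right. A condensed sketch of that comparison, not the library function itself (simplified to numeric fields, with missing fields treated as 0):

  lt() { # true when version $1 sorts before version $2
      local IFS='.-:' v a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1 # equal versions are not "less than"
  }
  lt 1.15 2 && echo 'lcov older than 2.x'   # prints: lcov older than 2.x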
00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60741 ']' 00:05:50.791 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.791 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:05:50.791 [2024-11-17 01:25:59.126149] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:50.791 [2024-11-17 01:25:59.126267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60741 ] 00:05:51.049 [2024-11-17 01:25:59.283911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.049 [2024-11-17 01:25:59.378514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.612 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.612 01:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:05:51.612 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:51.612 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:05:51.612 01:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:51.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:52.127 Waiting for block devices as requested 00:05:52.127 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:52.127 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:52.385 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:52.385 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:57.687 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:57.687 01:26:05 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:05:57.687 01:26:05 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:05:57.687 BYT; 00:05:57.687 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:05:57.687 BYT; 00:05:57.687 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:05:57.687 01:26:05 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:05:57.687 01:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:05:58.624 The operation has completed successfully. 00:05:58.624 01:26:06 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:05:59.558 The operation has completed successfully. 00:05:59.558 01:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:59.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:00.383 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.383 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.383 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.383 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.643 01:26:08 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:00.643 01:26:08 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.644 01:26:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:00.644 [] 00:06:00.644 01:26:08 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.644 01:26:08 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:00.644 01:26:08 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:00.644 01:26:08 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:00.644 01:26:08 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:00.644 01:26:08 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:00.644 01:26:08 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.644 01:26:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:00.913 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.913 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:00.913 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.913 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:00.913 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.913 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:00.913 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:00.913 01:26:09 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.913 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:00.913 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6770ca60-873e-47cd-a9ca-3ef0d3f01ae5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6770ca60-873e-47cd-a9ca-3ef0d3f01ae5",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "05006a41-15fc-4ecd-bb94-a6a0889fbeb5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "05006a41-15fc-4ecd-bb94-a6a0889fbeb5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8d159eac-2b22-4b83-8a98-d18da7995313"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8d159eac-2b22-4b83-8a98-d18da7995313",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "daf21e35-3434-41bf-8160-1eb06b12462f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "daf21e35-3434-41bf-8160-1eb06b12462f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "68495fe2-7d9f-4f1c-ade3-c63bccb32791"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "68495fe2-7d9f-4f1c-ade3-c63bccb32791",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:00.914 01:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60741 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60741 ']' 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60741 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.914 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60741 00:06:01.174 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.174 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.174 killing process with pid 60741 00:06:01.174 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60741' 00:06:01.174 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60741 00:06:01.174 01:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60741 00:06:02.114 01:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:02.114 01:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:02.114 01:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:02.114 01:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.114 01:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:02.114 ************************************ 00:06:02.114 START TEST bdev_hello_world 00:06:02.114 ************************************ 00:06:02.114 01:26:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:02.374 
[2024-11-17 01:26:10.596396] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:02.374 [2024-11-17 01:26:10.596513] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61358 ] 00:06:02.374 [2024-11-17 01:26:10.752958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.633 [2024-11-17 01:26:10.834112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.909 [2024-11-17 01:26:11.325478] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:02.909 [2024-11-17 01:26:11.325533] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:02.909 [2024-11-17 01:26:11.325555] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:02.909 [2024-11-17 01:26:11.327960] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:02.909 [2024-11-17 01:26:11.328535] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:02.909 [2024-11-17 01:26:11.328565] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:02.909 [2024-11-17 01:26:11.328777] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:02.909 00:06:02.909 [2024-11-17 01:26:11.328816] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:03.882 00:06:03.882 real 0m1.462s 00:06:03.882 user 0m1.197s 00:06:03.882 sys 0m0.159s 00:06:03.882 01:26:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.882 ************************************ 00:06:03.882 END TEST bdev_hello_world 00:06:03.882 ************************************ 00:06:03.882 01:26:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:03.882 01:26:12 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:03.882 01:26:12 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:03.882 01:26:12 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.882 01:26:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:03.882 ************************************ 00:06:03.882 START TEST bdev_bounds 00:06:03.882 ************************************ 00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61395 00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.882 Process bdevio pid: 61395 00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61395' 00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61395 00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61395 ']' 00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
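The Nvme1n1p1 and Nvme1n1p2 targets exercised by the bounds tests below come from the setup_gpt_conf sequence traced earlier: parted labels the first NVMe namespace and carves it into two half-size partitions, then sgdisk stamps them with SPDK's partition-type GUIDs scraped out of module/bdev/gpt/gpt.h. A condensed sketch of that flow (device path, partition names, and GUIDs as they appear in this run; the helper mirrors the IFS='()' scraping visible in the scripts/common.sh trace):

  get_spdk_guid() { # pull the GUID out of a '#define ... (0x...-0x...)' line
      local _ guid
      IFS='()' read -r _ guid _ < <(grep -w "$1" module/bdev/gpt/gpt.h)
      echo "${guid//0x/}"   # 0x6527994e-0x2c5a-... -> 6527994e-2c5a-...
  }
  parted -s /dev/nvme0n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  sgdisk -t "1:$(get_spdk_guid SPDK_GPT_PART_TYPE_GUID)" \
         -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t "2:$(get_spdk_guid SPDK_GPT_PART_TYPE_GUID_OLD)" \
         -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1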
00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:06:03.882 01:26:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:06:03.883 [2024-11-17 01:26:12.091615] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:03.883 [2024-11-17 01:26:12.091740] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61395 ]
00:06:04.141 [2024-11-17 01:26:12.250525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:04.141 [2024-11-17 01:26:12.349880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:04.141 [2024-11-17 01:26:12.350381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:04.141 [2024-11-17 01:26:12.350388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.708 01:26:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:04.708 01:26:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:06:04.708 01:26:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:06:04.708 I/O targets:
00:06:04.708 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:06:04.708 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:06:04.708 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:06:04.708 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:04.708 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:04.708 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:04.708 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:06:04.708
00:06:04.708
00:06:04.708 CUnit - A unit testing framework for C - Version 2.1-3
00:06:04.708 http://cunit.sourceforge.net/
00:06:04.708
00:06:04.708
00:06:04.708 Suite: bdevio tests on: Nvme3n1
00:06:04.708 Test: blockdev write read block ...passed
00:06:04.708 Test: blockdev write zeroes read block ...passed
00:06:04.708 Test: blockdev write zeroes read no split ...passed
00:06:04.708 Test: blockdev write zeroes read split ...passed
00:06:04.708 Test: blockdev write zeroes read split partial ...passed
00:06:04.708 Test: blockdev reset ...[2024-11-17 01:26:13.062163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:06:04.708 [2024-11-17 01:26:13.065142] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
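Unlike the single-core bdevperf runs earlier (-c 0x1), bdevio is launched with core mask 0x7, and the three reactor lines above follow directly from it: the mask is a bitmap with one bit per CPU, so 0x7 = 0b111 selects cores 0, 1 and 2. A small sketch that expands any mask the same way:

  mask=0x7
  for ((i = 0; i < 64; i++)); do
      if (( (mask >> i) & 1 )); then echo "core $i"; fi
  done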
00:06:04.708 passed 00:06:04.708 Test: blockdev write read 8 blocks ...passed 00:06:04.708 Test: blockdev write read size > 128k ...passed 00:06:04.708 Test: blockdev write read invalid size ...passed 00:06:04.708 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.708 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.708 Test: blockdev write read max offset ...passed 00:06:04.708 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.708 Test: blockdev writev readv 8 blocks ...passed 00:06:04.708 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.708 Test: blockdev writev readv block ...passed 00:06:04.708 Test: blockdev writev readv size > 128k ...passed 00:06:04.708 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.708 Test: blockdev comparev and writev ...[2024-11-17 01:26:13.072639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1804000 len:0x1000 00:06:04.708 [2024-11-17 01:26:13.072749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.708 passed 00:06:04.708 Test: blockdev nvme passthru rw ...passed 00:06:04.708 Test: blockdev nvme passthru vendor specific ...[2024-11-17 01:26:13.073703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:04.708 passed 00:06:04.708 Test: blockdev nvme admin passthru ...[2024-11-17 01:26:13.073771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:04.708 passed 00:06:04.708 Test: blockdev copy ...passed 00:06:04.708 Suite: bdevio tests on: Nvme2n3 00:06:04.708 Test: blockdev write read block ...passed 00:06:04.708 Test: blockdev write zeroes read block ...passed 00:06:04.708 Test: blockdev write zeroes read no split ...passed 00:06:04.708 Test: blockdev write zeroes read split ...passed 00:06:04.708 Test: blockdev write zeroes read split partial ...passed 00:06:04.708 Test: blockdev reset ...[2024-11-17 01:26:13.123461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:04.708 [2024-11-17 01:26:13.126342] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:04.708 passed 00:06:04.708 Test: blockdev write read 8 blocks ...passed 00:06:04.708 Test: blockdev write read size > 128k ...passed 00:06:04.708 Test: blockdev write read invalid size ...passed 00:06:04.708 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.708 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.708 Test: blockdev write read max offset ...passed 00:06:04.708 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.709 Test: blockdev writev readv 8 blocks ...passed 00:06:04.709 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.709 Test: blockdev writev readv block ...passed 00:06:04.709 Test: blockdev writev readv size > 128k ...passed 00:06:04.709 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.709 Test: blockdev comparev and writev ...[2024-11-17 01:26:13.133163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1802000 len:0x1000 00:06:04.709 [2024-11-17 01:26:13.133211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.709 passed 00:06:04.709 Test: blockdev nvme passthru rw ...passed 00:06:04.709 Test: blockdev nvme passthru vendor specific ...passed 00:06:04.709 Test: blockdev nvme admin passthru ...[2024-11-17 01:26:13.133933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:04.709 [2024-11-17 01:26:13.133959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:04.709 passed 00:06:04.709 Test: blockdev copy ...passed 00:06:04.709 Suite: bdevio tests on: Nvme2n2 00:06:04.709 Test: blockdev write read block ...passed 00:06:04.709 Test: blockdev write zeroes read block ...passed 00:06:04.709 Test: blockdev write zeroes read no split ...passed 00:06:04.968 Test: blockdev write zeroes read split ...passed 00:06:04.968 Test: blockdev write zeroes read split partial ...passed 00:06:04.968 Test: blockdev reset ...[2024-11-17 01:26:13.183989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:04.968 [2024-11-17 01:26:13.186931] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:04.968 passed 00:06:04.968 Test: blockdev write read 8 blocks ...passed 00:06:04.968 Test: blockdev write read size > 128k ...passed 00:06:04.968 Test: blockdev write read invalid size ...passed 00:06:04.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.968 Test: blockdev write read max offset ...passed 00:06:04.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.968 Test: blockdev writev readv 8 blocks ...passed 00:06:04.968 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.968 Test: blockdev writev readv block ...passed 00:06:04.968 Test: blockdev writev readv size > 128k ...passed 00:06:04.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.968 Test: blockdev comparev and writev ...[2024-11-17 01:26:13.193275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4a38000 len:0x1000 00:06:04.968 [2024-11-17 01:26:13.193313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.968 passed 00:06:04.968 Test: blockdev nvme passthru rw ...passed 00:06:04.968 Test: blockdev nvme passthru vendor specific ...[2024-11-17 01:26:13.193907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:04.968 [2024-11-17 01:26:13.193929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:04.968 passed 00:06:04.968 Test: blockdev nvme admin passthru ...passed 00:06:04.968 Test: blockdev copy ...passed 00:06:04.968 Suite: bdevio tests on: Nvme2n1 00:06:04.968 Test: blockdev write read block ...passed 00:06:04.968 Test: blockdev write zeroes read block ...passed 00:06:04.968 Test: blockdev write zeroes read no split ...passed 00:06:04.968 Test: blockdev write zeroes read split ...passed 00:06:04.968 Test: blockdev write zeroes read split partial ...passed 00:06:04.968 Test: blockdev reset ...[2024-11-17 01:26:13.243305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:04.968 [2024-11-17 01:26:13.246198] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:04.968 passed 00:06:04.968 Test: blockdev write read 8 blocks ...passed 00:06:04.968 Test: blockdev write read size > 128k ...passed 00:06:04.968 Test: blockdev write read invalid size ...passed 00:06:04.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.969 Test: blockdev write read max offset ...passed 00:06:04.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.969 Test: blockdev writev readv 8 blocks ...passed 00:06:04.969 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.969 Test: blockdev writev readv block ...passed 00:06:04.969 Test: blockdev writev readv size > 128k ...passed 00:06:04.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.969 Test: blockdev comparev and writev ...[2024-11-17 01:26:13.252739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4a34000 len:0x1000 00:06:04.969 [2024-11-17 01:26:13.252779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.969 passed 00:06:04.969 Test: blockdev nvme passthru rw ...passed 00:06:04.969 Test: blockdev nvme passthru vendor specific ...passed 00:06:04.969 Test: blockdev nvme admin passthru ...[2024-11-17 01:26:13.253310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:04.969 [2024-11-17 01:26:13.253333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:04.969 passed 00:06:04.969 Test: blockdev copy ...passed 00:06:04.969 Suite: bdevio tests on: Nvme1n1p2 00:06:04.969 Test: blockdev write read block ...passed 00:06:04.969 Test: blockdev write zeroes read block ...passed 00:06:04.969 Test: blockdev write zeroes read no split ...passed 00:06:04.969 Test: blockdev write zeroes read split ...passed 00:06:04.969 Test: blockdev write zeroes read split partial ...passed 00:06:04.969 Test: blockdev reset ...[2024-11-17 01:26:13.296881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:04.969 [2024-11-17 01:26:13.299570] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:04.969 passed 00:06:04.969 Test: blockdev write read 8 blocks ...passed 00:06:04.969 Test: blockdev write read size > 128k ...passed 00:06:04.969 Test: blockdev write read invalid size ...passed 00:06:04.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.969 Test: blockdev write read max offset ...passed 00:06:04.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.969 Test: blockdev writev readv 8 blocks ...passed 00:06:04.969 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.969 Test: blockdev writev readv block ...passed 00:06:04.969 Test: blockdev writev readv size > 128k ...passed 00:06:04.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.969 Test: blockdev comparev and writev ...[2024-11-17 01:26:13.305912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c4a30000 len:0x1000 00:06:04.969 passed 00:06:04.969 Test: blockdev nvme passthru rw ...passed 00:06:04.969 Test: blockdev nvme passthru vendor specific ...passed 00:06:04.969 Test: blockdev nvme admin passthru ...passed 00:06:04.969 Test: blockdev copy ...[2024-11-17 01:26:13.305946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.969 passed 00:06:04.969 Suite: bdevio tests on: Nvme1n1p1 00:06:04.969 Test: blockdev write read block ...passed 00:06:04.969 Test: blockdev write zeroes read block ...passed 00:06:04.969 Test: blockdev write zeroes read no split ...passed 00:06:04.969 Test: blockdev write zeroes read split ...passed 00:06:04.969 Test: blockdev write zeroes read split partial ...passed 00:06:04.969 Test: blockdev reset ...[2024-11-17 01:26:13.349344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:04.969 [2024-11-17 01:26:13.351901] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:04.969 passed 00:06:04.969 Test: blockdev write read 8 blocks ...passed 00:06:04.969 Test: blockdev write read size > 128k ...passed 00:06:04.969 Test: blockdev write read invalid size ...passed 00:06:04.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.969 Test: blockdev write read max offset ...passed 00:06:04.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.969 Test: blockdev writev readv 8 blocks ...passed 00:06:04.969 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.969 Test: blockdev writev readv block ...passed 00:06:04.969 Test: blockdev writev readv size > 128k ...passed 00:06:04.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.969 Test: blockdev comparev and writev ...[2024-11-17 01:26:13.359181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b220e000 len:0x1000 00:06:04.969 passed 00:06:04.969 Test: blockdev nvme passthru rw ...passed 00:06:04.969 Test: blockdev nvme passthru vendor specific ...passed 00:06:04.969 Test: blockdev nvme admin passthru ...passed 00:06:04.969 Test: blockdev copy ...[2024-11-17 01:26:13.359227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:04.969 passed 00:06:04.969 Suite: bdevio tests on: Nvme0n1 00:06:04.969 Test: blockdev write read block ...passed 00:06:04.969 Test: blockdev write zeroes read block ...passed 00:06:04.969 Test: blockdev write zeroes read no split ...passed 00:06:04.969 Test: blockdev write zeroes read split ...passed 00:06:04.969 Test: blockdev write zeroes read split partial ...passed 00:06:04.969 Test: blockdev reset ...[2024-11-17 01:26:13.402004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:04.969 [2024-11-17 01:26:13.404609] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:04.969 passed 00:06:04.969 Test: blockdev write read 8 blocks ...passed 00:06:04.969 Test: blockdev write read size > 128k ...passed 00:06:04.969 Test: blockdev write read invalid size ...passed 00:06:04.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.969 Test: blockdev write read max offset ...passed 00:06:04.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.969 Test: blockdev writev readv 8 blocks ...passed 00:06:04.969 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.969 Test: blockdev writev readv block ...passed 00:06:04.969 Test: blockdev writev readv size > 128k ...passed 00:06:04.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.969 Test: blockdev comparev and writev ...[2024-11-17 01:26:13.410268] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:04.969 separate metadata which is not supported yet. 
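The skip on Nvme0n1 just above is deliberate, not a failure: that namespace is formatted with separate (non-interleaved) metadata, which the bdevio compare-and-write path does not support yet, so only the remaining cases run against it. One way to confirm the layout from the shell while the test app is up, assuming the usual bdev_get_bdevs output fields for NVMe bdevs:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {md_size, md_interleave}'
  # md_size > 0 with md_interleave == false is the separate-metadata layout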
00:06:04.969 passed 00:06:04.969 Test: blockdev nvme passthru rw ...passed 00:06:04.969 Test: blockdev nvme passthru vendor specific ...[2024-11-17 01:26:13.410861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:04.969 [2024-11-17 01:26:13.410898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:04.969 passed 00:06:04.969 Test: blockdev nvme admin passthru ...passed 00:06:04.969 Test: blockdev copy ...passed 00:06:04.969 00:06:04.969 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.969 suites 7 7 n/a 0 0 00:06:04.969 tests 161 161 161 0 0 00:06:04.969 asserts 1025 1025 1025 0 n/a 00:06:04.969 00:06:04.969 Elapsed time = 1.040 seconds 00:06:04.969 0 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61395 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61395 ']' 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61395 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61395 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.227 killing process with pid 61395 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61395' 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61395 00:06:05.227 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61395 00:06:05.794 01:26:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:05.794 00:06:05.794 real 0m1.943s 00:06:05.794 user 0m4.968s 00:06:05.794 sys 0m0.274s 00:06:05.794 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.795 01:26:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:05.795 ************************************ 00:06:05.795 END TEST bdev_bounds 00:06:05.795 ************************************ 00:06:05.795 01:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:05.795 01:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:05.795 01:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.795 01:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:05.795 ************************************ 00:06:05.795 START TEST bdev_nbd 00:06:05.795 ************************************ 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:05.795 01:26:14 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61449 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61449 /var/tmp/spdk-nbd.sock 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61449 ']' 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:05.795 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:05.795 [2024-11-17 01:26:14.086802] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
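From here the shape of the nbd test is visible directly in the xtrace: start the stripped-down bdev_svc app on a private RPC socket with the same bdev JSON config, wait for the socket to come up, then export bdevs to the kernel nbd driver over RPC. Condensed from the commands traced in this run:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  nbd_pid=$!
  # nbd_start_disk's device argument is optional; with none given the RPC
  # picks the first free /dev/nbdX and prints it, as in the first loop below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk Nvme0n1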
00:06:05.795 [2024-11-17 01:26:14.086921] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:05.795 [2024-11-17 01:26:14.244080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.053 [2024-11-17 01:26:14.323606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.620 01:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.620 01:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:06.620 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:06.620 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.620 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:06.620 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:06.621 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:06.621 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.621 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:06.621 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:06.621 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:06.621 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:06.621 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:06.621 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:06.621 01:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.879 1+0 records in 00:06:06.879 1+0 records out 00:06:06.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299457 s, 13.7 MB/s 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:06.879 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.137 1+0 records in 00:06:07.137 1+0 records out 00:06:07.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463376 s, 8.8 MB/s 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:07.137 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.395 1+0 records in 00:06:07.395 1+0 records out 00:06:07.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458692 s, 8.9 MB/s 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.395 1+0 records in 00:06:07.395 1+0 records out 00:06:07.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334957 s, 12.2 MB/s 00:06:07.395 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.653 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.653 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.653 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.653 01:26:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.653 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:07.653 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:07.653 01:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.653 1+0 records in 00:06:07.653 1+0 records out 00:06:07.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378737 s, 10.8 MB/s 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:07.653 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.911 1+0 records in 00:06:07.911 1+0 records out 00:06:07.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073263 s, 5.6 MB/s 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:07.911 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.169 1+0 records in 00:06:08.169 1+0 records out 00:06:08.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549904 s, 7.4 MB/s 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:08.169 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd0", 00:06:08.427 "bdev_name": "Nvme0n1" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd1", 00:06:08.427 "bdev_name": "Nvme1n1p1" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd2", 00:06:08.427 "bdev_name": "Nvme1n1p2" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd3", 00:06:08.427 "bdev_name": "Nvme2n1" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd4", 00:06:08.427 "bdev_name": "Nvme2n2" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd5", 00:06:08.427 "bdev_name": "Nvme2n3" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd6", 00:06:08.427 "bdev_name": "Nvme3n1" 00:06:08.427 } 00:06:08.427 ]' 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd0", 00:06:08.427 "bdev_name": "Nvme0n1" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd1", 00:06:08.427 "bdev_name": "Nvme1n1p1" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd2", 00:06:08.427 "bdev_name": "Nvme1n1p2" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd3", 00:06:08.427 "bdev_name": "Nvme2n1" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd4", 00:06:08.427 "bdev_name": "Nvme2n2" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd5", 00:06:08.427 "bdev_name": "Nvme2n3" 00:06:08.427 }, 00:06:08.427 { 00:06:08.427 "nbd_device": "/dev/nbd6", 00:06:08.427 "bdev_name": "Nvme3n1" 00:06:08.427 } 00:06:08.427 ]' 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.427 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.685 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.685 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.685 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.685 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.685 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.685 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.685 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.685 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.685 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.685 01:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.943 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.204 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:09.462 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:09.462 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:09.462 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:09.462 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.462 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.462 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:09.462 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.462 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.462 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.462 01:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:09.719 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:09.719 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:09.719 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:09.719 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.719 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.719 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:09.719 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.719 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.719 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.720 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
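Teardown mirrors startup: nbd_stop_disk tells the target to drop the export, and the waitfornbd_exit helper traced through this stretch then polls /proc/partitions until the kernel has actually released the device. Reduced to its core, a sketch for the nbd6 case being torn down here:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_stop_disk /dev/nbd6
  for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd6 /proc/partitions || break   # gone from the kernel: done
    sleep 0.1
  done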
00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.978 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.236 01:26:18 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:10.236 /dev/nbd0 00:06:10.236 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.495 1+0 records in 00:06:10.495 1+0 records out 00:06:10.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048583 s, 8.4 MB/s 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:10.495 /dev/nbd1 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.495 01:26:18 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.495 1+0 records in 00:06:10.495 1+0 records out 00:06:10.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205092 s, 20.0 MB/s 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:10.495 01:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:10.753 /dev/nbd10 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.753 1+0 records in 00:06:10.753 1+0 records out 00:06:10.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808546 s, 5.1 MB/s 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:10.753 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:11.020 /dev/nbd11 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.020 1+0 records in 00:06:11.020 1+0 records out 00:06:11.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111982 s, 3.7 MB/s 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:11.020 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:11.282 /dev/nbd12 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
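Every export in this phase is gated by the same waitfornbd helper whose trace repeats above and continues below for nbd12: poll until the name appears in /proc/partitions, then prove the device actually serves I/O with one 4 KiB O_DIRECT read. Its core, with the scratch path from this run:

  for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd12 /proc/partitions && break
    sleep 0.1
  done
  dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
      bs=4096 count=1 iflag=direct
  size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
  rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
  [ "$size" != 0 ]   # a zero-byte read means the device never answered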
00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.282 1+0 records in 00:06:11.282 1+0 records out 00:06:11.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571262 s, 7.2 MB/s 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:11.282 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:11.541 /dev/nbd13 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.541 1+0 records in 00:06:11.541 1+0 records out 00:06:11.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556939 s, 7.4 MB/s 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:11.541 01:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:11.799 /dev/nbd14 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.799 1+0 records in 00:06:11.799 1+0 records out 00:06:11.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511749 s, 8.0 MB/s 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.799 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.800 01:26:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:11.800 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.800 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:11.800 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.800 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.800 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd0", 00:06:12.058 "bdev_name": "Nvme0n1" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd1", 00:06:12.058 "bdev_name": "Nvme1n1p1" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd10", 00:06:12.058 "bdev_name": "Nvme1n1p2" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd11", 00:06:12.058 "bdev_name": "Nvme2n1" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd12", 00:06:12.058 "bdev_name": "Nvme2n2" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd13", 00:06:12.058 "bdev_name": "Nvme2n3" 
00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd14", 00:06:12.058 "bdev_name": "Nvme3n1" 00:06:12.058 } 00:06:12.058 ]' 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd0", 00:06:12.058 "bdev_name": "Nvme0n1" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd1", 00:06:12.058 "bdev_name": "Nvme1n1p1" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd10", 00:06:12.058 "bdev_name": "Nvme1n1p2" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd11", 00:06:12.058 "bdev_name": "Nvme2n1" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd12", 00:06:12.058 "bdev_name": "Nvme2n2" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd13", 00:06:12.058 "bdev_name": "Nvme2n3" 00:06:12.058 }, 00:06:12.058 { 00:06:12.058 "nbd_device": "/dev/nbd14", 00:06:12.058 "bdev_name": "Nvme3n1" 00:06:12.058 } 00:06:12.058 ]' 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.058 /dev/nbd1 00:06:12.058 /dev/nbd10 00:06:12.058 /dev/nbd11 00:06:12.058 /dev/nbd12 00:06:12.058 /dev/nbd13 00:06:12.058 /dev/nbd14' 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.058 /dev/nbd1 00:06:12.058 /dev/nbd10 00:06:12.058 /dev/nbd11 00:06:12.058 /dev/nbd12 00:06:12.058 /dev/nbd13 00:06:12.058 /dev/nbd14' 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:12.058 256+0 records in 00:06:12.058 256+0 records out 00:06:12.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115329 s, 90.9 MB/s 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.058 256+0 records in 00:06:12.058 256+0 records out 00:06:12.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0769078 s, 13.6 MB/s 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.058 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.058 256+0 records in 00:06:12.058 256+0 records out 00:06:12.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103051 s, 10.2 MB/s 00:06:12.059 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.059 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:12.317 256+0 records in 00:06:12.317 256+0 records out 00:06:12.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0782818 s, 13.4 MB/s 00:06:12.317 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.317 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:12.317 256+0 records in 00:06:12.317 256+0 records out 00:06:12.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145543 s, 7.2 MB/s 00:06:12.317 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.317 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:12.575 256+0 records in 00:06:12.575 256+0 records out 00:06:12.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0768978 s, 13.6 MB/s 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:12.575 256+0 records in 00:06:12.575 256+0 records out 00:06:12.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0765413 s, 13.7 MB/s 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:12.575 256+0 records in 00:06:12.575 256+0 records out 00:06:12.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.076679 s, 13.7 MB/s 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.575 01:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:12.575 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.575 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:12.575 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:12.575 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:12.575 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.575 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:12.575 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.575 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:12.575 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.575 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.833 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.833 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.833 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.833 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.833 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.833 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.833 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.833 01:26:21 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:12.833 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.833 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.122 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.122 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.122 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.122 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.122 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.122 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.122 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.122 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.122 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.122 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:13.382 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:13.641 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:13.641 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:13.641 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.641 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.641 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:13.641 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.641 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.641 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.641 01:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:13.641 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:13.641 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:13.641 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:13.641 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.641 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.641 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:13.641 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.641 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.641 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.641 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:13.899 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:13.899 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:13.899 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:13.899 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.899 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.899 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:13.899 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.899 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.899 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.899 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.157 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:14.416 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:14.674 malloc_lvol_verify 00:06:14.674 01:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:14.674 f38a7908-1410-4c6d-8e85-f12725763c6b 00:06:14.932 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:14.932 3c570cdb-7357-4e4c-80b7-6acea0b2974c 00:06:14.932 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:15.191 /dev/nbd0 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:15.191 mke2fs 1.47.0 (5-Feb-2023) 00:06:15.191 Discarding device blocks: 0/4096 done 00:06:15.191 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:15.191 00:06:15.191 Allocating group tables: 0/1 done 00:06:15.191 Writing inode tables: 0/1 done 00:06:15.191 Creating journal (1024 blocks): done 00:06:15.191 Writing superblocks and filesystem accounting information: 0/1 done 00:06:15.191 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:15.191 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.448 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.448 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.448 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.448 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.448 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.448 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.448 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.448 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61449 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61449 ']' 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61449 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61449 00:06:15.449 killing process with pid 61449 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61449' 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61449 00:06:15.449 01:26:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61449 00:06:16.017 01:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:16.017 00:06:16.017 real 0m10.371s 00:06:16.017 user 0m14.947s 00:06:16.017 sys 0m3.353s 00:06:16.017 01:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.017 01:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:16.017 ************************************ 00:06:16.017 END TEST bdev_nbd 00:06:16.017 ************************************ 00:06:16.017 01:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:16.017 01:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:16.017 01:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:16.017 skipping fio tests on NVMe due to multi-ns failures. 00:06:16.017 01:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:16.017 01:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:16.017 01:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:16.017 01:26:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:16.017 01:26:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.017 01:26:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:16.017 ************************************ 00:06:16.017 START TEST bdev_verify 00:06:16.017 ************************************ 00:06:16.017 01:26:24 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:16.274 [2024-11-17 01:26:24.489562] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:16.274 [2024-11-17 01:26:24.489676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61854 ] 00:06:16.274 [2024-11-17 01:26:24.650043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.532 [2024-11-17 01:26:24.748635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.532 [2024-11-17 01:26:24.748749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.098 Running I/O for 5 seconds... 
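Each of the remaining run_test blocks is a bdevperf invocation that differs only in workload, I/O size, and run time. A sketch of the verify invocation just launched, with the flags annotated; the glosses for -q/-o/-w/-t/-m follow bdevperf's usual option meanings, while -C is simply carried over from the log without interpretation:

# Sketch of the logged bdevperf verify run; paths are this job's, adjust
# to your checkout. Flag glosses are conventional bdevperf meanings.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev config
    -q 128      # queue depth per job
    -o 4096     # I/O size in bytes
    -w verify   # write a pattern, read it back, compare
    -t 5        # run time in seconds
    -C          # taken verbatim from the log, not glossed here
    -m 0x3      # core mask: reactors on cores 0 and 1
)
"$BDEVPERF" "${args[@]}"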
00:06:19.041 21056.00 IOPS, 82.25 MiB/s
[2024-11-17T01:26:28.873Z] 22144.00 IOPS, 86.50 MiB/s
[2024-11-17T01:26:29.803Z] 23168.00 IOPS, 90.50 MiB/s
[2024-11-17T01:26:30.738Z] 22880.00 IOPS, 89.38 MiB/s
[2024-11-17T01:26:30.738Z] 22336.00 IOPS, 87.25 MiB/s
00:06:22.279 Latency(us)
00:06:22.279 [2024-11-17T01:26:30.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:22.279 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x0 length 0xbd0bd
00:06:22.279 Nvme0n1 : 5.08 1625.50 6.35 0.00 0.00 78258.50 13812.97 83482.78
00:06:22.279 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:22.279 Nvme0n1 : 5.11 1528.71 5.97 0.00 0.00 82943.99 14317.10 68560.74
00:06:22.279 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x0 length 0x4ff80
00:06:22.279 Nvme1n1p1 : 5.10 1630.76 6.37 0.00 0.00 78187.45 15930.29 74206.92
00:06:22.279 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x4ff80 length 0x4ff80
00:06:22.279 Nvme1n1p1 : 5.11 1527.85 5.97 0.00 0.00 82833.56 13712.15 69367.34
00:06:22.279 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x0 length 0x4ff7f
00:06:22.279 Nvme1n1p2 : 5.10 1630.19 6.37 0.00 0.00 78088.81 17039.36 71383.83
00:06:22.279 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:06:22.279 Nvme1n1p2 : 5.11 1527.05 5.97 0.00 0.00 82690.69 12804.73 70577.23
00:06:22.279 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x0 length 0x80000
00:06:22.279 Nvme2n1 : 5.11 1629.12 6.36 0.00 0.00 77946.97 17845.96 66544.25
00:06:22.279 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x80000 length 0x80000
00:06:22.279 Nvme2n1 : 5.12 1525.81 5.96 0.00 0.00 82564.44 9023.80 73400.32
00:06:22.279 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x0 length 0x80000
00:06:22.279 Nvme2n2 : 5.11 1628.19 6.36 0.00 0.00 77806.49 16837.71 66544.25
00:06:22.279 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x80000 length 0x80000
00:06:22.279 Nvme2n2 : 5.05 1521.26 5.94 0.00 0.00 83716.88 15627.82 85499.27
00:06:22.279 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x0 length 0x80000
00:06:22.279 Nvme2n3 : 5.11 1627.37 6.36 0.00 0.00 77638.13 15123.69 69770.63
00:06:22.279 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x80000 length 0x80000
00:06:22.279 Nvme2n3 : 5.09 1521.84 5.94 0.00 0.00 83384.25 13006.38 71787.13
00:06:22.279 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x0 length 0x20000
00:06:22.279 Nvme3n1 : 5.11 1626.69 6.35 0.00 0.00 77496.31 9427.10 73400.32
00:06:22.279 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:22.279 Verification LBA range: start 0x20000 length 0x20000
00:06:22.279 Nvme3n1 : 5.11 1529.11 5.97 0.00 0.00 83110.66 14014.62 65737.65
[2024-11-17T01:26:30.738Z] ===================================================================================================================
00:06:22.279 [2024-11-17T01:26:30.738Z] Total : 22079.45 86.25 0.00 0.00 80390.74 9023.80 85499.27
00:06:23.653
00:06:23.653 real 0m7.398s
00:06:23.653 user 0m13.911s
00:06:23.653 sys 0m0.203s
00:06:23.653 01:26:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:23.653 ************************************
00:06:23.653 END TEST bdev_verify
00:06:23.653 ************************************
00:06:23.654 01:26:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:23.654 01:26:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:23.654 01:26:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:23.654 01:26:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:23.654 01:26:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:23.654 ************************************
00:06:23.654 START TEST bdev_verify_big_io
00:06:23.654 ************************************
00:06:23.654 01:26:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:23.912 [2024-11-17 01:26:31.948884] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:23.912 [2024-11-17 01:26:31.949011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ]
00:06:23.912 [2024-11-17 01:26:32.114900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:23.912 [2024-11-17 01:26:32.212924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:23.912 [2024-11-17 01:26:32.213097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.478 Running I/O for 5 seconds...
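Before the 64 KiB run's numbers start, one detail worth reading off the verify timings above: real 0m7.398s against user 0m13.911s. SPDK reactors busy-poll, so with -m 0x3 two cores spin for the life of the process and user time lands near cores × wall time rather than near the I/O time. A quick check of that ratio:

# CPU time vs wall time for the verify run above; values copied from the
# log. A ratio near 2 is what two busy-polling reactors predict.
awk 'BEGIN { printf "user/real = %.2f\n", 13.911 / 7.398 }'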
00:06:30.618 1355.00 IOPS, 84.69 MiB/s
[2024-11-17T01:26:39.340Z] 3199.50 IOPS, 199.97 MiB/s
00:06:30.881 Latency(us)
00:06:30.881 [2024-11-17T01:26:39.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:30.882 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:30.882 Verification LBA range: start 0x0 length 0xbd0b
00:06:30.882 Nvme0n1 : 5.75 111.28 6.96 0.00 0.00 1076909.06 17241.01 1380893.93
00:06:30.882 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:30.882 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:30.882 Nvme0n1 : 5.99 117.52 7.34 0.00 0.00 925063.13 44161.18 1193763.45
00:06:30.882 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:30.882 Verification LBA range: start 0x0 length 0x4ff8
00:06:30.882 Nvme1n1p1 : 5.75 115.07 7.19 0.00 0.00 1018874.77 102034.51 1180857.90
00:06:30.882 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:30.882 Verification LBA range: start 0x4ff8 length 0x4ff8
00:06:30.882 Nvme1n1p1 : 6.06 126.68 7.92 0.00 0.00 832032.95 33675.42 1232480.10
00:06:30.882 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:30.882 Verification LBA range: start 0x0 length 0x4ff7
00:06:30.882 Nvme1n1p2 : 5.97 124.42 7.78 0.00 0.00 918874.29 71383.83 967916.31
00:06:30.882 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:30.882 Verification LBA range: start 0x4ff7 length 0x4ff7
00:06:30.882 Nvme1n1p2 : 6.18 154.21 9.64 0.00 0.00 659884.80 604.95 1503496.66
00:06:30.882 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:30.882 Verification LBA range: start 0x0 length 0x8000
00:06:30.882 Nvme2n1 : 5.97 124.56 7.79 0.00 0.00 884409.68 71787.13 961463.53
00:06:30.882 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:30.882 Verification LBA range: start 0x8000 length 0x8000
00:06:30.882 Nvme2n1 : 5.68 101.42 6.34 0.00 0.00 1209332.27 23794.61 1374441.16
00:06:30.882 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:30.883 Verification LBA range: start 0x0 length 0x8000
00:06:30.883 Nvme2n2 : 6.03 131.16 8.20 0.00 0.00 814865.05 55655.19 987274.63
00:06:30.883 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:30.883 Verification LBA range: start 0x8000 length 0x8000
00:06:30.883 Nvme2n2 : 5.76 105.06 6.57 0.00 0.00 1141315.31 77030.01 1219574.55
00:06:30.883 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:30.883 Verification LBA range: start 0x0 length 0x8000
00:06:30.883 Nvme2n3 : 6.18 142.65 8.92 0.00 0.00 723844.72 41338.09 1561571.64
00:06:30.883 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:30.883 Verification LBA range: start 0x8000 length 0x8000
00:06:30.883 Nvme2n3 : 5.87 108.99 6.81 0.00 0.00 1066464.18 109697.18 1129235.69
00:06:30.883 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:30.883 Verification LBA range: start 0x0 length 0x2000
00:06:30.883 Nvme3n1 : 6.25 143.31 8.96 0.00 0.00 704203.23 460.01 2310093.59
00:06:30.883 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:30.883 Verification LBA range: start 0x2000 length 0x2000
00:06:30.883 Nvme3n1 : 5.88 108.93 6.81 0.00 0.00 1025861.47 112116.97 1161499.57
00:06:30.883 [2024-11-17T01:26:39.342Z] ===================================================================================================================
00:06:30.883 [2024-11-17T01:26:39.342Z] Total : 1715.25 107.20 0.00 0.00 904321.78 460.01 2310093.59
00:06:32.260 ************************************
00:06:32.260 END TEST bdev_verify_big_io
00:06:32.260 ************************************
00:06:32.260
00:06:32.260 real 0m8.817s
00:06:32.260 user 0m16.261s
00:06:32.260 sys 0m0.231s
00:06:32.260 01:26:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:32.260 01:26:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:32.519 01:26:40 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:32.519 01:26:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:32.519 01:26:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:32.519 01:26:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:32.519 ************************************
00:06:32.519 START TEST bdev_write_zeroes
00:06:32.519 ************************************
00:06:32.519 01:26:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:32.519 [2024-11-17 01:26:40.813911] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:32.519 [2024-11-17 01:26:40.814005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62062 ]
00:06:32.519 [2024-11-17 01:26:40.965323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:32.777 [2024-11-17 01:26:41.048500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.342 Running I/O for 1 seconds...
00:06:34.273 67648.00 IOPS, 264.25 MiB/s
00:06:34.273 Latency(us)
[2024-11-17T01:26:42.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:34.274 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:34.274 Nvme0n1 : 1.02 9624.94 37.60 0.00 0.00 13271.07 10989.88 28230.89
00:06:34.274 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:34.274 Nvme1n1p1 : 1.03 9612.97 37.55 0.00 0.00 13271.19 10687.41 28230.89
00:06:34.274 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:34.274 Nvme1n1p2 : 1.03 9601.18 37.50 0.00 0.00 13217.17 10838.65 27021.00
00:06:34.274 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:34.274 Nvme2n1 : 1.03 9590.33 37.46 0.00 0.00 13179.87 10939.47 25407.80
00:06:34.274 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:34.274 Nvme2n2 : 1.03 9579.45 37.42 0.00 0.00 13150.95 10536.17 25710.28
00:06:34.274 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:34.274 Nvme2n3 : 1.03 9568.65 37.38 0.00 0.00 13142.46 10586.58 26214.40
00:06:34.274 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:34.274 Nvme3n1 : 1.03 9557.86 37.34 0.00 0.00 13116.87 9275.86 27424.30
00:06:34.274 [2024-11-17T01:26:42.733Z] ===================================================================================================================
00:06:34.274 [2024-11-17T01:26:42.733Z] Total : 67135.39 262.25 0.00 0.00 13192.80 9275.86 28230.89
00:06:35.206
00:06:35.206 real 0m2.576s
00:06:35.206 user 0m2.294s
00:06:35.206 sys 0m0.168s
00:06:35.206 01:26:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.206 ************************************
00:06:35.206 END TEST bdev_write_zeroes
00:06:35.206 ************************************
00:06:35.206 01:26:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:35.206 01:26:43 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:35.206 01:26:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:35.206 01:26:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.206 01:26:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:35.206 ************************************
00:06:35.206 START TEST bdev_json_nonenclosed
00:06:35.206 ************************************
00:06:35.206 01:26:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:35.206 [2024-11-17 01:26:43.436577] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
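Before the JSON negative tests get going, the throughput columns in the three latency tables above are redundant with the IOPS columns, which makes them a cheap cross-check: MiB/s = IOPS × I/O size / 2^20. Recomputing the three run totals from the log:

# Cross-check each run's Total IOPS against its Total MiB/s.
awk 'BEGIN {
    printf "verify, 4096 B:        %.2f MiB/s\n", 22079.45 * 4096 / 1048576   # table: 86.25
    printf "big_io, 65536 B:       %.2f MiB/s\n", 1715.25 * 65536 / 1048576   # table: 107.20
    printf "write_zeroes, 4096 B:  %.2f MiB/s\n", 67135.39 * 4096 / 1048576   # table: 262.25
}'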
00:06:35.206 [2024-11-17 01:26:43.436695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62115 ] 00:06:35.206 [2024-11-17 01:26:43.597139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.464 [2024-11-17 01:26:43.694087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.464 [2024-11-17 01:26:43.694166] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:35.464 [2024-11-17 01:26:43.694183] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:35.464 [2024-11-17 01:26:43.694192] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.464 00:06:35.464 real 0m0.497s 00:06:35.464 user 0m0.305s 00:06:35.464 sys 0m0.088s 00:06:35.464 01:26:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.464 01:26:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:35.464 ************************************ 00:06:35.464 END TEST bdev_json_nonenclosed 00:06:35.464 ************************************ 00:06:35.464 01:26:43 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:35.464 01:26:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:35.464 01:26:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.464 01:26:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.464 ************************************ 00:06:35.464 START TEST bdev_json_nonarray 00:06:35.464 ************************************ 00:06:35.464 01:26:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:35.722 [2024-11-17 01:26:43.977364] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:35.722 [2024-11-17 01:26:43.977481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62135 ] 00:06:35.722 [2024-11-17 01:26:44.137831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.980 [2024-11-17 01:26:44.234783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.980 [2024-11-17 01:26:44.234875] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
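Both JSON negative tests fail on the top-level shape of the --json config. A sketch of what the fixtures plausibly look like, given the two logged error strings; the valid form follows SPDK's documented {"subsystems": [...]} layout, the bdev config entry is illustrative, and the exact contents of the repo's nonenclosed.json/nonarray.json may differ:

# Valid shape: one object whose "subsystems" key holds an array.
cat > valid.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF
# Rejected with "not enclosed in {}": the top level is not an object.
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# Rejected with "'subsystems' should be an array": object, not array.
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF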
00:06:35.980 [2024-11-17 01:26:44.234893] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:35.980 [2024-11-17 01:26:44.234902] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.980 00:06:35.980 real 0m0.494s 00:06:35.980 user 0m0.299s 00:06:35.980 sys 0m0.091s 00:06:35.980 01:26:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.980 01:26:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:35.980 ************************************ 00:06:35.980 END TEST bdev_json_nonarray 00:06:35.980 ************************************ 00:06:36.238 01:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:06:36.238 01:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:06:36.238 01:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:06:36.238 01:26:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.238 01:26:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.238 01:26:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:36.238 ************************************ 00:06:36.238 START TEST bdev_gpt_uuid 00:06:36.238 ************************************ 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:06:36.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62166 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62166 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62166 ']' 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.238 01:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:36.238 [2024-11-17 01:26:44.522916] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
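waitforlisten holds the test until the freshly forked spdk_tgt answers on /var/tmp/spdk.sock. The same gate can be reproduced outside the harness by polling any cheap RPC until it succeeds; rpc_get_methods is a standard SPDK RPC, while the retry budget below is an assumption rather than the harness's own:

# Poll the spdk_tgt RPC socket until it accepts requests.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    if "$RPC" -s "$SOCK" rpc_get_methods > /dev/null 2>&1; then
        echo "spdk_tgt is up"
        break
    fi
    sleep 0.1   # assumed pacing; the real helper may wait differently
done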
00:06:36.238 [2024-11-17 01:26:44.523035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62166 ] 00:06:36.238 [2024-11-17 01:26:44.682284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.496 [2024-11-17 01:26:44.781743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.062 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.062 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:06:37.062 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:37.062 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.062 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:37.321 Some configs were skipped because the RPC state that can call them passed over. 00:06:37.321 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.321 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:06:37.321 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.321 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:37.321 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.322 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:06:37.322 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.322 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:37.322 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.322 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:06:37.322 { 00:06:37.322 "name": "Nvme1n1p1", 00:06:37.322 "aliases": [ 00:06:37.322 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:06:37.322 ], 00:06:37.322 "product_name": "GPT Disk", 00:06:37.322 "block_size": 4096, 00:06:37.322 "num_blocks": 655104, 00:06:37.322 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:37.322 "assigned_rate_limits": { 00:06:37.322 "rw_ios_per_sec": 0, 00:06:37.322 "rw_mbytes_per_sec": 0, 00:06:37.322 "r_mbytes_per_sec": 0, 00:06:37.322 "w_mbytes_per_sec": 0 00:06:37.322 }, 00:06:37.322 "claimed": false, 00:06:37.322 "zoned": false, 00:06:37.322 "supported_io_types": { 00:06:37.322 "read": true, 00:06:37.322 "write": true, 00:06:37.322 "unmap": true, 00:06:37.322 "flush": true, 00:06:37.322 "reset": true, 00:06:37.322 "nvme_admin": false, 00:06:37.322 "nvme_io": false, 00:06:37.322 "nvme_io_md": false, 00:06:37.322 "write_zeroes": true, 00:06:37.322 "zcopy": false, 00:06:37.322 "get_zone_info": false, 00:06:37.322 "zone_management": false, 00:06:37.322 "zone_append": false, 00:06:37.322 "compare": true, 00:06:37.322 "compare_and_write": false, 00:06:37.322 "abort": true, 00:06:37.322 "seek_hole": false, 00:06:37.322 "seek_data": false, 00:06:37.322 "copy": true, 00:06:37.322 "nvme_iov_md": false 00:06:37.322 }, 00:06:37.322 "driver_specific": { 
00:06:37.322 "gpt": { 00:06:37.322 "base_bdev": "Nvme1n1", 00:06:37.322 "offset_blocks": 256, 00:06:37.322 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:06:37.322 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:37.322 "partition_name": "SPDK_TEST_first" 00:06:37.322 } 00:06:37.322 } 00:06:37.322 } 00:06:37.322 ]' 00:06:37.322 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:06:37.322 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:06:37.322 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:06:37.580 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:37.580 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:06:37.581 { 00:06:37.581 "name": "Nvme1n1p2", 00:06:37.581 "aliases": [ 00:06:37.581 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:06:37.581 ], 00:06:37.581 "product_name": "GPT Disk", 00:06:37.581 "block_size": 4096, 00:06:37.581 "num_blocks": 655103, 00:06:37.581 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:37.581 "assigned_rate_limits": { 00:06:37.581 "rw_ios_per_sec": 0, 00:06:37.581 "rw_mbytes_per_sec": 0, 00:06:37.581 "r_mbytes_per_sec": 0, 00:06:37.581 "w_mbytes_per_sec": 0 00:06:37.581 }, 00:06:37.581 "claimed": false, 00:06:37.581 "zoned": false, 00:06:37.581 "supported_io_types": { 00:06:37.581 "read": true, 00:06:37.581 "write": true, 00:06:37.581 "unmap": true, 00:06:37.581 "flush": true, 00:06:37.581 "reset": true, 00:06:37.581 "nvme_admin": false, 00:06:37.581 "nvme_io": false, 00:06:37.581 "nvme_io_md": false, 00:06:37.581 "write_zeroes": true, 00:06:37.581 "zcopy": false, 00:06:37.581 "get_zone_info": false, 00:06:37.581 "zone_management": false, 00:06:37.581 "zone_append": false, 00:06:37.581 "compare": true, 00:06:37.581 "compare_and_write": false, 00:06:37.581 "abort": true, 00:06:37.581 "seek_hole": false, 00:06:37.581 "seek_data": false, 00:06:37.581 "copy": true, 00:06:37.581 "nvme_iov_md": false 00:06:37.581 }, 00:06:37.581 "driver_specific": { 00:06:37.581 "gpt": { 00:06:37.581 "base_bdev": "Nvme1n1", 00:06:37.581 "offset_blocks": 655360, 00:06:37.581 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:06:37.581 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:37.581 "partition_name": "SPDK_TEST_second" 00:06:37.581 } 00:06:37.581 } 00:06:37.581 } 00:06:37.581 ]' 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62166 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62166 ']' 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62166 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62166 00:06:37.581 killing process with pid 62166 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62166' 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62166 00:06:37.581 01:26:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62166 00:06:39.481 ************************************ 00:06:39.481 END TEST bdev_gpt_uuid 00:06:39.481 ************************************ 00:06:39.481 00:06:39.481 real 0m2.974s 00:06:39.481 user 0m3.123s 00:06:39.481 sys 0m0.356s 00:06:39.481 01:26:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.481 01:26:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:39.481 01:26:47 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:06:39.481 01:26:47 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:39.481 01:26:47 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:06:39.481 01:26:47 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:39.481 01:26:47 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:39.481 01:26:47 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:06:39.481 01:26:47 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:06:39.481 01:26:47 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:06:39.481 01:26:47 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:39.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:39.481 Waiting for block devices as requested 00:06:39.481 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:39.739 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:06:39.739 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:39.739 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:45.068 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:45.068 01:26:53 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:06:45.068 01:26:53 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:06:45.327 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:45.327 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:45.327 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:45.327 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:45.327 01:26:53 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:06:45.327 00:06:45.327 real 0m54.812s 00:06:45.327 user 1m9.898s 00:06:45.327 sys 0m7.367s 00:06:45.327 01:26:53 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.327 ************************************ 00:06:45.327 END TEST blockdev_nvme_gpt 00:06:45.327 01:26:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 ************************************ 00:06:45.327 01:26:53 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:45.327 01:26:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.327 01:26:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.327 01:26:53 -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 ************************************ 00:06:45.327 START TEST nvme 00:06:45.327 ************************************ 00:06:45.327 01:26:53 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:45.586 * Looking for test storage... 00:06:45.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:06:45.586 01:26:53 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.586 01:26:53 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.586 01:26:53 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.586 01:26:53 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.586 01:26:53 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.586 01:26:53 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.586 01:26:53 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.586 01:26:53 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.586 01:26:53 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.586 01:26:53 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.586 01:26:53 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.586 01:26:53 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.586 01:26:53 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.586 01:26:53 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.586 01:26:53 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.586 01:26:53 nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:45.586 01:26:53 nvme -- scripts/common.sh@345 -- # : 1 00:06:45.586 01:26:53 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.586 01:26:53 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.586 01:26:53 nvme -- scripts/common.sh@365 -- # decimal 1 00:06:45.586 01:26:53 nvme -- scripts/common.sh@353 -- # local d=1 00:06:45.586 01:26:53 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.586 01:26:53 nvme -- scripts/common.sh@355 -- # echo 1 00:06:45.586 01:26:53 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.586 01:26:53 nvme -- scripts/common.sh@366 -- # decimal 2 00:06:45.586 01:26:53 nvme -- scripts/common.sh@353 -- # local d=2 00:06:45.586 01:26:53 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.586 01:26:53 nvme -- scripts/common.sh@355 -- # echo 2 00:06:45.586 01:26:53 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.586 01:26:53 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.586 01:26:53 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.586 01:26:53 nvme -- scripts/common.sh@368 -- # return 0 00:06:45.586 01:26:53 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.586 01:26:53 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.586 --rc genhtml_branch_coverage=1 00:06:45.586 --rc genhtml_function_coverage=1 00:06:45.586 --rc genhtml_legend=1 00:06:45.586 --rc geninfo_all_blocks=1 00:06:45.586 --rc geninfo_unexecuted_blocks=1 00:06:45.586 00:06:45.586 ' 00:06:45.586 01:26:53 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.586 --rc genhtml_branch_coverage=1 00:06:45.586 --rc genhtml_function_coverage=1 00:06:45.586 --rc genhtml_legend=1 00:06:45.586 --rc geninfo_all_blocks=1 00:06:45.586 --rc geninfo_unexecuted_blocks=1 00:06:45.586 00:06:45.586 ' 00:06:45.586 01:26:53 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.586 --rc genhtml_branch_coverage=1 00:06:45.586 --rc genhtml_function_coverage=1 00:06:45.586 --rc genhtml_legend=1 00:06:45.586 --rc geninfo_all_blocks=1 00:06:45.586 --rc geninfo_unexecuted_blocks=1 00:06:45.586 00:06:45.586 ' 00:06:45.586 01:26:53 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.586 --rc genhtml_branch_coverage=1 00:06:45.586 --rc genhtml_function_coverage=1 00:06:45.586 --rc genhtml_legend=1 00:06:45.586 --rc geninfo_all_blocks=1 00:06:45.586 --rc geninfo_unexecuted_blocks=1 00:06:45.586 00:06:45.586 ' 00:06:45.586 01:26:53 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:45.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.411 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:46.411 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:46.411 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:46.411 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:46.411 01:26:54 nvme -- nvme/nvme.sh@79 -- # uname 00:06:46.411 01:26:54 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:06:46.411 01:26:54 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:06:46.411 01:26:54 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:06:46.411 01:26:54 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:06:46.411 01:26:54 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:06:46.411 01:26:54 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:06:46.411 Waiting for stub to ready for secondary processes... 00:06:46.411 01:26:54 nvme -- common/autotest_common.sh@1075 -- # stubpid=62801 00:06:46.411 01:26:54 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:06:46.411 01:26:54 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:06:46.411 01:26:54 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62801 ]] 00:06:46.411 01:26:54 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:06:46.411 01:26:54 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:06:46.669 [2024-11-17 01:26:54.901397] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:46.669 [2024-11-17 01:26:54.901614] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:06:47.236 [2024-11-17 01:26:55.635838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.494 [2024-11-17 01:26:55.726976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.494 [2024-11-17 01:26:55.727230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.494 [2024-11-17 01:26:55.727306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.494 [2024-11-17 01:26:55.740477] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:06:47.494 [2024-11-17 01:26:55.740618] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:47.494 [2024-11-17 01:26:55.752439] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:06:47.494 [2024-11-17 01:26:55.752641] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:06:47.494 [2024-11-17 01:26:55.754474] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:47.494 [2024-11-17 01:26:55.754694] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:06:47.494 [2024-11-17 01:26:55.754832] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:06:47.495 [2024-11-17 01:26:55.756448] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:47.495 [2024-11-17 01:26:55.756649] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:06:47.495 [2024-11-17 01:26:55.756769] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:06:47.495 [2024-11-17 01:26:55.758709] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:47.495 [2024-11-17 01:26:55.758975] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:06:47.495 [2024-11-17 01:26:55.759108] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:06:47.495 [2024-11-17 01:26:55.759168] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:06:47.495 [2024-11-17 01:26:55.759228] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:06:47.495 01:26:55 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:06:47.495 01:26:55 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:06:47.495 done. 00:06:47.495 01:26:55 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:06:47.495 01:26:55 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:06:47.495 01:26:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.495 01:26:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:47.495 ************************************ 00:06:47.495 START TEST nvme_reset 00:06:47.495 ************************************ 00:06:47.495 01:26:55 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:06:47.753 Initializing NVMe Controllers 00:06:47.753 Skipping QEMU NVMe SSD at 0000:00:10.0 00:06:47.753 Skipping QEMU NVMe SSD at 0000:00:11.0 00:06:47.753 Skipping QEMU NVMe SSD at 0000:00:13.0 00:06:47.753 Skipping QEMU NVMe SSD at 0000:00:12.0 00:06:47.753 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:06:47.753 00:06:47.753 real 0m0.225s 00:06:47.753 user 0m0.075s 00:06:47.753 sys 0m0.099s 00:06:47.753 01:26:56 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.753 01:26:56 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:06:47.753 ************************************ 00:06:47.753 END TEST nvme_reset 00:06:47.753 ************************************ 00:06:47.753 01:26:56 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:06:47.753 01:26:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.753 01:26:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.753 01:26:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:47.753 ************************************ 00:06:47.753 START TEST nvme_identify 00:06:47.753 ************************************ 00:06:47.753 01:26:56 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:06:47.753 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:06:47.753 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:06:47.753 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:06:47.753 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:06:47.753 01:26:56 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:47.753 01:26:56 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:06:47.753 01:26:56 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:47.753 01:26:56 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:47.753 01:26:56 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:47.753 01:26:56 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:47.753 01:26:56 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:47.753 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:06:48.015 
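[Editor's note, not part of the captured output] The controller dumps that follow come from SPDK's identify example, invoked above as build/bin/spdk_nvme_identify -i 0 (-i 0 joins the stub's shared-memory group 0 so the probe attaches to the same devices as the stub process). A minimal sketch for re-running it by hand from an SPDK build tree, assuming devices have already been rebound by scripts/setup.sh and assuming the -r transport-ID flag accepts a local PCIe address (it is documented primarily for NVMe-oF targets):
  sudo ./scripts/setup.sh                                                     # rebind NVMe devices to a userspace driver
  sudo ./build/bin/spdk_nvme_identify                                         # dump every controller found
  sudo ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'    # if supported, limit the probe to one controller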
===================================================== 00:06:48.015 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:48.015 ===================================================== 00:06:48.015 Controller Capabilities/Features 00:06:48.015 ================================ 00:06:48.015 Vendor ID: 1b36 00:06:48.015 Subsystem Vendor ID: 1af4 00:06:48.015 Serial Number: 12340 00:06:48.015 Model Number: QEMU NVMe Ctrl 00:06:48.015 Firmware Version: 8.0.0 00:06:48.015 Recommended Arb Burst: 6 00:06:48.015 IEEE OUI Identifier: 00 54 52 00:06:48.015 Multi-path I/O 00:06:48.015 May have multiple subsystem ports: No 00:06:48.015 May have multiple controllers: No 00:06:48.015 Associated with SR-IOV VF: No 00:06:48.015 Max Data Transfer Size: 524288 00:06:48.015 Max Number of Namespaces: 256 00:06:48.015 Max Number of I/O Queues: 64 00:06:48.015 NVMe Specification Version (VS): 1.4 00:06:48.015 NVMe Specification Version (Identify): 1.4 00:06:48.015 Maximum Queue Entries: 2048 00:06:48.015 Contiguous Queues Required: Yes 00:06:48.015 Arbitration Mechanisms Supported 00:06:48.015 Weighted Round Robin: Not Supported 00:06:48.015 Vendor Specific: Not Supported 00:06:48.015 Reset Timeout: 7500 ms 00:06:48.015 Doorbell Stride: 4 bytes 00:06:48.015 NVM Subsystem Reset: Not Supported 00:06:48.015 Command Sets Supported 00:06:48.015 NVM Command Set: Supported 00:06:48.015 Boot Partition: Not Supported 00:06:48.015 Memory Page Size Minimum: 4096 bytes 00:06:48.015 Memory Page Size Maximum: 65536 bytes 00:06:48.015 Persistent Memory Region: Not Supported 00:06:48.015 Optional Asynchronous Events Supported 00:06:48.015 Namespace Attribute Notices: Supported 00:06:48.015 Firmware Activation Notices: Not Supported 00:06:48.015 ANA Change Notices: Not Supported 00:06:48.015 PLE Aggregate Log Change Notices: Not Supported 00:06:48.015 LBA Status Info Alert Notices: Not Supported 00:06:48.015 EGE Aggregate Log Change Notices: Not Supported 00:06:48.015 Normal NVM Subsystem Shutdown event: Not Supported 00:06:48.015 Zone Descriptor Change Notices: Not Supported 00:06:48.015 Discovery Log Change Notices: Not Supported 00:06:48.015 Controller Attributes 00:06:48.015 128-bit Host Identifier: Not Supported 00:06:48.015 Non-Operational Permissive Mode: Not Supported 00:06:48.015 NVM Sets: Not Supported 00:06:48.015 Read Recovery Levels: Not Supported 00:06:48.015 Endurance Groups: Not Supported 00:06:48.015 Predictable Latency Mode: Not Supported 00:06:48.015 Traffic Based Keep ALive: Not Supported 00:06:48.015 Namespace Granularity: Not Supported 00:06:48.015 SQ Associations: Not Supported 00:06:48.015 UUID List: Not Supported 00:06:48.015 Multi-Domain Subsystem: Not Supported 00:06:48.015 Fixed Capacity Management: Not Supported 00:06:48.015 Variable Capacity Management: Not Supported 00:06:48.015 Delete Endurance Group: Not Supported 00:06:48.015 Delete NVM Set: Not Supported 00:06:48.015 Extended LBA Formats Supported: Supported 00:06:48.015 Flexible Data Placement Supported: Not Supported 00:06:48.015 00:06:48.015 Controller Memory Buffer Support 00:06:48.015 ================================ 00:06:48.015 Supported: No 00:06:48.015 00:06:48.015 Persistent Memory Region Support 00:06:48.015 ================================ 00:06:48.015 Supported: No 00:06:48.015 00:06:48.015 Admin Command Set Attributes 00:06:48.015 ============================ 00:06:48.015 Security Send/Receive: Not Supported 00:06:48.015 Format NVM: Supported 00:06:48.015 Firmware Activate/Download: Not Supported 00:06:48.015 Namespace Management: 
Supported 00:06:48.015 Device Self-Test: Not Supported 00:06:48.015 Directives: Supported 00:06:48.015 NVMe-MI: Not Supported 00:06:48.015 Virtualization Management: Not Supported 00:06:48.015 Doorbell Buffer Config: Supported 00:06:48.015 Get LBA Status Capability: Not Supported 00:06:48.015 Command & Feature Lockdown Capability: Not Supported 00:06:48.015 Abort Command Limit: 4 00:06:48.015 Async Event Request Limit: 4 00:06:48.015 Number of Firmware Slots: N/A 00:06:48.015 Firmware Slot 1 Read-Only: N/A 00:06:48.015 Firmware Activation Without Reset: N/A 00:06:48.015 Multiple Update Detection Support: N/A 00:06:48.015 Firmware Update Granularity: No Information Provided 00:06:48.015 Per-Namespace SMART Log: Yes 00:06:48.015 Asymmetric Namespace Access Log Page: Not Supported 00:06:48.015 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:06:48.015 Command Effects Log Page: Supported 00:06:48.015 Get Log Page Extended Data: Supported 00:06:48.015 Telemetry Log Pages: Not Supported 00:06:48.015 Persistent Event Log Pages: Not Supported 00:06:48.015 Supported Log Pages Log Page: May Support 00:06:48.015 Commands Supported & Effects Log Page: Not Supported 00:06:48.015 Feature Identifiers & Effects Log Page:May Support 00:06:48.015 NVMe-MI Commands & Effects Log Page: May Support 00:06:48.015 Data Area 4 for Telemetry Log: Not Supported 00:06:48.015 Error Log Page Entries Supported: 1 00:06:48.015 Keep Alive: Not Supported 00:06:48.015 00:06:48.015 NVM Command Set Attributes 00:06:48.015 ========================== 00:06:48.015 Submission Queue Entry Size 00:06:48.015 Max: 64 00:06:48.015 Min: 64 00:06:48.015 Completion Queue Entry Size 00:06:48.015 Max: 16 00:06:48.015 Min: 16 00:06:48.015 Number of Namespaces: 256 00:06:48.015 Compare Command: Supported 00:06:48.015 Write Uncorrectable Command: Not Supported 00:06:48.015 Dataset Management Command: Supported 00:06:48.015 Write Zeroes Command: Supported 00:06:48.015 Set Features Save Field: Supported 00:06:48.015 Reservations: Not Supported 00:06:48.015 Timestamp: Supported 00:06:48.015 Copy: Supported 00:06:48.015 Volatile Write Cache: Present 00:06:48.015 Atomic Write Unit (Normal): 1 00:06:48.015 Atomic Write Unit (PFail): 1 00:06:48.015 Atomic Compare & Write Unit: 1 00:06:48.015 Fused Compare & Write: Not Supported 00:06:48.015 Scatter-Gather List 00:06:48.015 SGL Command Set: Supported 00:06:48.015 SGL Keyed: Not Supported 00:06:48.015 SGL Bit Bucket Descriptor: Not Supported 00:06:48.015 SGL Metadata Pointer: Not Supported 00:06:48.015 Oversized SGL: Not Supported 00:06:48.015 SGL Metadata Address: Not Supported 00:06:48.015 SGL Offset: Not Supported 00:06:48.015 Transport SGL Data Block: Not Supported 00:06:48.015 Replay Protected Memory Block: Not Supported 00:06:48.015 00:06:48.015 Firmware Slot Information 00:06:48.015 ========================= 00:06:48.015 Active slot: 1 00:06:48.015 Slot 1 Firmware Revision: 1.0 00:06:48.015 00:06:48.015 00:06:48.015 Commands Supported and Effects 00:06:48.015 ============================== 00:06:48.015 Admin Commands 00:06:48.015 -------------- 00:06:48.015 Delete I/O Submission Queue (00h): Supported 00:06:48.015 Create I/O Submission Queue (01h): Supported 00:06:48.015 Get Log Page (02h): Supported 00:06:48.015 Delete I/O Completion Queue (04h): Supported 00:06:48.015 Create I/O Completion Queue (05h): Supported 00:06:48.015 Identify (06h): Supported 00:06:48.015 Abort (08h): Supported 00:06:48.015 Set Features (09h): Supported 00:06:48.015 Get Features (0Ah): Supported 00:06:48.015 Asynchronous 
Event Request (0Ch): Supported 00:06:48.015 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:48.015 Directive Send (19h): Supported 00:06:48.015 Directive Receive (1Ah): Supported 00:06:48.015 Virtualization Management (1Ch): Supported 00:06:48.015 Doorbell Buffer Config (7Ch): Supported 00:06:48.015 Format NVM (80h): Supported LBA-Change 00:06:48.015 I/O Commands 00:06:48.015 ------------ 00:06:48.015 Flush (00h): Supported LBA-Change 00:06:48.015 Write (01h): Supported LBA-Change 00:06:48.015 Read (02h): Supported 00:06:48.015 Compare (05h): Supported 00:06:48.015 Write Zeroes (08h): Supported LBA-Change 00:06:48.015 Dataset Management (09h): Supported LBA-Change 00:06:48.015 Unknown (0Ch): Supported 00:06:48.015 Unknown (12h): Supported 00:06:48.015 Copy (19h): Supported LBA-Change 00:06:48.016 Unknown (1Dh): Supported LBA-Change 00:06:48.016 00:06:48.016 Error Log 00:06:48.016 ========= 00:06:48.016 00:06:48.016 Arbitration 00:06:48.016 =========== 00:06:48.016 Arbitration Burst: no limit 00:06:48.016 00:06:48.016 Power Management 00:06:48.016 ================ 00:06:48.016 Number of Power States: 1 00:06:48.016 Current Power State: Power State #0 00:06:48.016 Power State #0: 00:06:48.016 Max Power: 25.00 W 00:06:48.016 Non-Operational State: Operational 00:06:48.016 Entry Latency: 16 microseconds 00:06:48.016 Exit Latency: 4 microseconds 00:06:48.016 Relative Read Throughput: 0 00:06:48.016 Relative Read Latency: 0 00:06:48.016 Relative Write Throughput: 0 00:06:48.016 Relative Write Latency: 0 00:06:48.016 Idle Power: Not Reported 00:06:48.016 Active Power: Not Reported 00:06:48.016 Non-Operational Permissive Mode: Not Supported 00:06:48.016 00:06:48.016 Health Information 00:06:48.016 ================== 00:06:48.016 Critical Warnings: 00:06:48.016 Available Spare Space: OK 00:06:48.016 Temperature: OK 00:06:48.016 Device Reliability: OK 00:06:48.016 Read Only: No 00:06:48.016 Volatile Memory Backup: OK 00:06:48.016 Current Temperature: 323 Kelvin (50 Celsius) 00:06:48.016 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:48.016 Available Spare: 0% 00:06:48.016 Available Spare Threshold: 0% 00:06:48.016 Life Percentage Used: 0% 00:06:48.016 Data Units Read: 689 00:06:48.016 Data Units Written: 617 00:06:48.016 Host Read Commands: 37380 00:06:48.016 Host Write Commands: 37166 00:06:48.016 Controller Busy Time: 0 minutes 00:06:48.016 Power Cycles: 0 00:06:48.016 Power On Hours: 0 hours 00:06:48.016 Unsafe Shutdowns: 0 00:06:48.016 Unrecoverable Media Errors: 0 00:06:48.016 Lifetime Error Log Entries: 0 00:06:48.016 Warning Temperature Time: 0 minutes 00:06:48.016 Critical Temperature Time: 0 minutes 00:06:48.016 00:06:48.016 Number of Queues 00:06:48.016 ================ 00:06:48.016 Number of I/O Submission Queues: 64 00:06:48.016 Number of I/O Completion Queues: 64 00:06:48.016 00:06:48.016 ZNS Specific Controller Data 00:06:48.016 ============================ 00:06:48.016 Zone Append Size Limit: 0 00:06:48.016 00:06:48.016 00:06:48.016 Active Namespaces 00:06:48.016 ================= 00:06:48.016 Namespace ID:1 00:06:48.016 Error Recovery Timeout: Unlimited 00:06:48.016 Command Set Identifier: NVM (00h) 00:06:48.016 Deallocate: Supported 00:06:48.016 Deallocated/Unwritten Error: Supported 00:06:48.016 Deallocated Read Value: All 0x00 00:06:48.016 Deallocate in Write Zeroes: Not Supported 00:06:48.016 Deallocated Guard Field: 0xFFFF 00:06:48.016 Flush: Supported 00:06:48.016 Reservation: Not Supported 00:06:48.016 Metadata Transferred as: Separate Metadata Buffer 
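[Editor's note, not part of the captured output] The comparisons traced at the top of this section, such as [[ abf1734f-66e5-... == \a\b\f\1\7\3\4\f... ]], are not corruption: under set -x, bash prints the right-hand side of a [[ == ]] test with every character backslash-escaped when it is being matched literally rather than as a glob pattern. A minimal demo that reproduces the same trace style, assuming bash:
  set -x
  uuid=abf1734f-66e5-4c0f-aa29-4021d4d307df
  [[ $uuid == "$uuid" ]] && echo match    # traces as: [[ abf1734f-... == \a\b\f\1\7\3\4\f... ]]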
00:06:48.016 Namespace Sharing Capabilities: Private 00:06:48.016 Size (in LBAs): 1548666 (5GiB) 00:06:48.016 Capacity (in LBAs): 1548666 (5GiB) 00:06:48.016 Utilization (in LBAs): 1548666 (5GiB) 00:06:48.016 Thin Provisioning: Not Supported 00:06:48.016 Per-NS Atomic Units: No 00:06:48.016 Maximum Single Source Range Length: 128 00:06:48.016 Maximum Copy Length: 128 00:06:48.016 Maximum Source Range Count: 128 00:06:48.016 NGUID/EUI64 Never Reused: No 00:06:48.016 Namespace Write Protected: No 00:06:48.016 Number of LBA Formats: 8 00:06:48.016 Current LBA Format: LBA Format #07 00:06:48.016 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.016 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.016 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.016 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:48.016 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:48.016 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.016 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.016 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.016 00:06:48.016 NVM Specific Namespace Data 00:06:48.016 =========================== 00:06:48.016 Logical Block Storage Tag Mask: 0 00:06:48.016 Protection Information Capabilities: 00:06:48.016 16b Guard Protection Information Storage Tag Support: No 00:06:48.016 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.016 Storage Tag Check Read Support: No 00:06:48.016 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.016 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.016 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.016 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.016 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.016 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.016 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.016 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.016 ===================================================== 00:06:48.016 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:48.016 ===================================================== 00:06:48.016 Controller Capabilities/Features 00:06:48.016 ================================ 00:06:48.016 [2024-11-17 01:26:56.393301] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62822 terminated unexpected 00:06:48.016 [2024-11-17 01:26:56.395097] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62822 terminated unexpected 00:06:48.016 Vendor ID: 1b36 00:06:48.016 Subsystem Vendor ID: 1af4 00:06:48.016 Serial Number: 12341 00:06:48.016 Model Number: QEMU NVMe Ctrl 00:06:48.016 Firmware Version: 8.0.0 00:06:48.016 Recommended Arb Burst: 6 00:06:48.016 IEEE OUI Identifier: 00 54 52 00:06:48.016 Multi-path I/O 00:06:48.016 May have multiple subsystem ports: No 00:06:48.016 May have multiple controllers: No 00:06:48.016 Associated with SR-IOV VF: No 00:06:48.016 Max Data Transfer Size: 524288 00:06:48.016 Max Number of Namespaces: 256 00:06:48.016 Max Number of I/O Queues: 64 00:06:48.016 NVMe Specification Version (VS): 1.4 00:06:48.016 NVMe Specification 
Version (Identify): 1.4 00:06:48.016 Maximum Queue Entries: 2048 00:06:48.016 Contiguous Queues Required: Yes 00:06:48.016 Arbitration Mechanisms Supported 00:06:48.016 Weighted Round Robin: Not Supported 00:06:48.016 Vendor Specific: Not Supported 00:06:48.016 Reset Timeout: 7500 ms 00:06:48.016 Doorbell Stride: 4 bytes 00:06:48.016 NVM Subsystem Reset: Not Supported 00:06:48.016 Command Sets Supported 00:06:48.016 NVM Command Set: Supported 00:06:48.016 Boot Partition: Not Supported 00:06:48.016 Memory Page Size Minimum: 4096 bytes 00:06:48.016 Memory Page Size Maximum: 65536 bytes 00:06:48.016 Persistent Memory Region: Not Supported 00:06:48.016 Optional Asynchronous Events Supported 00:06:48.016 Namespace Attribute Notices: Supported 00:06:48.016 Firmware Activation Notices: Not Supported 00:06:48.016 ANA Change Notices: Not Supported 00:06:48.016 PLE Aggregate Log Change Notices: Not Supported 00:06:48.016 LBA Status Info Alert Notices: Not Supported 00:06:48.016 EGE Aggregate Log Change Notices: Not Supported 00:06:48.016 Normal NVM Subsystem Shutdown event: Not Supported 00:06:48.016 Zone Descriptor Change Notices: Not Supported 00:06:48.016 Discovery Log Change Notices: Not Supported 00:06:48.016 Controller Attributes 00:06:48.016 128-bit Host Identifier: Not Supported 00:06:48.016 Non-Operational Permissive Mode: Not Supported 00:06:48.016 NVM Sets: Not Supported 00:06:48.016 Read Recovery Levels: Not Supported 00:06:48.016 Endurance Groups: Not Supported 00:06:48.016 Predictable Latency Mode: Not Supported 00:06:48.016 Traffic Based Keep ALive: Not Supported 00:06:48.016 Namespace Granularity: Not Supported 00:06:48.016 SQ Associations: Not Supported 00:06:48.016 UUID List: Not Supported 00:06:48.016 Multi-Domain Subsystem: Not Supported 00:06:48.016 Fixed Capacity Management: Not Supported 00:06:48.016 Variable Capacity Management: Not Supported 00:06:48.016 Delete Endurance Group: Not Supported 00:06:48.016 Delete NVM Set: Not Supported 00:06:48.016 Extended LBA Formats Supported: Supported 00:06:48.016 Flexible Data Placement Supported: Not Supported 00:06:48.016 00:06:48.016 Controller Memory Buffer Support 00:06:48.016 ================================ 00:06:48.016 Supported: No 00:06:48.016 00:06:48.016 Persistent Memory Region Support 00:06:48.016 ================================ 00:06:48.016 Supported: No 00:06:48.016 00:06:48.016 Admin Command Set Attributes 00:06:48.016 ============================ 00:06:48.016 Security Send/Receive: Not Supported 00:06:48.016 Format NVM: Supported 00:06:48.016 Firmware Activate/Download: Not Supported 00:06:48.016 Namespace Management: Supported 00:06:48.016 Device Self-Test: Not Supported 00:06:48.016 Directives: Supported 00:06:48.017 NVMe-MI: Not Supported 00:06:48.017 Virtualization Management: Not Supported 00:06:48.017 Doorbell Buffer Config: Supported 00:06:48.017 Get LBA Status Capability: Not Supported 00:06:48.017 Command & Feature Lockdown Capability: Not Supported 00:06:48.017 Abort Command Limit: 4 00:06:48.017 Async Event Request Limit: 4 00:06:48.017 Number of Firmware Slots: N/A 00:06:48.017 Firmware Slot 1 Read-Only: N/A 00:06:48.017 Firmware Activation Without Reset: N/A 00:06:48.017 Multiple Update Detection Support: N/A 00:06:48.017 Firmware Update Granularity: No Information Provided 00:06:48.017 Per-Namespace SMART Log: Yes 00:06:48.017 Asymmetric Namespace Access Log Page: Not Supported 00:06:48.017 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:06:48.017 Command Effects Log Page: Supported 00:06:48.017 Get Log Page 
Extended Data: Supported 00:06:48.017 Telemetry Log Pages: Not Supported 00:06:48.017 Persistent Event Log Pages: Not Supported 00:06:48.017 Supported Log Pages Log Page: May Support 00:06:48.017 Commands Supported & Effects Log Page: Not Supported 00:06:48.017 Feature Identifiers & Effects Log Page:May Support 00:06:48.017 NVMe-MI Commands & Effects Log Page: May Support 00:06:48.017 Data Area 4 for Telemetry Log: Not Supported 00:06:48.017 Error Log Page Entries Supported: 1 00:06:48.017 Keep Alive: Not Supported 00:06:48.017 00:06:48.017 NVM Command Set Attributes 00:06:48.017 ========================== 00:06:48.017 Submission Queue Entry Size 00:06:48.017 Max: 64 00:06:48.017 Min: 64 00:06:48.017 Completion Queue Entry Size 00:06:48.017 Max: 16 00:06:48.017 Min: 16 00:06:48.017 Number of Namespaces: 256 00:06:48.017 Compare Command: Supported 00:06:48.017 Write Uncorrectable Command: Not Supported 00:06:48.017 Dataset Management Command: Supported 00:06:48.017 Write Zeroes Command: Supported 00:06:48.017 Set Features Save Field: Supported 00:06:48.017 Reservations: Not Supported 00:06:48.017 Timestamp: Supported 00:06:48.017 Copy: Supported 00:06:48.017 Volatile Write Cache: Present 00:06:48.017 Atomic Write Unit (Normal): 1 00:06:48.017 Atomic Write Unit (PFail): 1 00:06:48.017 Atomic Compare & Write Unit: 1 00:06:48.017 Fused Compare & Write: Not Supported 00:06:48.017 Scatter-Gather List 00:06:48.017 SGL Command Set: Supported 00:06:48.017 SGL Keyed: Not Supported 00:06:48.017 SGL Bit Bucket Descriptor: Not Supported 00:06:48.017 SGL Metadata Pointer: Not Supported 00:06:48.017 Oversized SGL: Not Supported 00:06:48.017 SGL Metadata Address: Not Supported 00:06:48.017 SGL Offset: Not Supported 00:06:48.017 Transport SGL Data Block: Not Supported 00:06:48.017 Replay Protected Memory Block: Not Supported 00:06:48.017 00:06:48.017 Firmware Slot Information 00:06:48.017 ========================= 00:06:48.017 Active slot: 1 00:06:48.017 Slot 1 Firmware Revision: 1.0 00:06:48.017 00:06:48.017 00:06:48.017 Commands Supported and Effects 00:06:48.017 ============================== 00:06:48.017 Admin Commands 00:06:48.017 -------------- 00:06:48.017 Delete I/O Submission Queue (00h): Supported 00:06:48.017 Create I/O Submission Queue (01h): Supported 00:06:48.017 Get Log Page (02h): Supported 00:06:48.017 Delete I/O Completion Queue (04h): Supported 00:06:48.017 Create I/O Completion Queue (05h): Supported 00:06:48.017 Identify (06h): Supported 00:06:48.017 Abort (08h): Supported 00:06:48.017 Set Features (09h): Supported 00:06:48.017 Get Features (0Ah): Supported 00:06:48.017 Asynchronous Event Request (0Ch): Supported 00:06:48.017 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:48.017 Directive Send (19h): Supported 00:06:48.017 Directive Receive (1Ah): Supported 00:06:48.017 Virtualization Management (1Ch): Supported 00:06:48.017 Doorbell Buffer Config (7Ch): Supported 00:06:48.017 Format NVM (80h): Supported LBA-Change 00:06:48.017 I/O Commands 00:06:48.017 ------------ 00:06:48.017 Flush (00h): Supported LBA-Change 00:06:48.017 Write (01h): Supported LBA-Change 00:06:48.017 Read (02h): Supported 00:06:48.017 Compare (05h): Supported 00:06:48.017 Write Zeroes (08h): Supported LBA-Change 00:06:48.017 Dataset Management (09h): Supported LBA-Change 00:06:48.017 Unknown (0Ch): Supported 00:06:48.017 Unknown (12h): Supported 00:06:48.017 Copy (19h): Supported LBA-Change 00:06:48.017 Unknown (1Dh): Supported LBA-Change 00:06:48.017 00:06:48.017 Error Log 00:06:48.017 ========= 
00:06:48.017 00:06:48.017 Arbitration 00:06:48.017 =========== 00:06:48.017 Arbitration Burst: no limit 00:06:48.017 00:06:48.017 Power Management 00:06:48.017 ================ 00:06:48.017 Number of Power States: 1 00:06:48.017 Current Power State: Power State #0 00:06:48.017 Power State #0: 00:06:48.017 Max Power: 25.00 W 00:06:48.017 Non-Operational State: Operational 00:06:48.017 Entry Latency: 16 microseconds 00:06:48.017 Exit Latency: 4 microseconds 00:06:48.017 Relative Read Throughput: 0 00:06:48.017 Relative Read Latency: 0 00:06:48.017 Relative Write Throughput: 0 00:06:48.017 Relative Write Latency: 0 00:06:48.017 Idle Power: Not Reported 00:06:48.017 Active Power: Not Reported 00:06:48.017 Non-Operational Permissive Mode: Not Supported 00:06:48.017 00:06:48.017 Health Information 00:06:48.017 ================== 00:06:48.017 Critical Warnings: 00:06:48.017 Available Spare Space: OK 00:06:48.017 [2024-11-17 01:26:56.396929] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62822 terminated unexpected 00:06:48.017 Temperature: OK 00:06:48.017 Device Reliability: OK 00:06:48.017 Read Only: No 00:06:48.017 Volatile Memory Backup: OK 00:06:48.017 Current Temperature: 323 Kelvin (50 Celsius) 00:06:48.017 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:48.017 Available Spare: 0% 00:06:48.017 Available Spare Threshold: 0% 00:06:48.017 Life Percentage Used: 0% 00:06:48.017 Data Units Read: 1129 00:06:48.017 Data Units Written: 993 00:06:48.017 Host Read Commands: 56728 00:06:48.017 Host Write Commands: 55464 00:06:48.017 Controller Busy Time: 0 minutes 00:06:48.017 Power Cycles: 0 00:06:48.017 Power On Hours: 0 hours 00:06:48.017 Unsafe Shutdowns: 0 00:06:48.017 Unrecoverable Media Errors: 0 00:06:48.017 Lifetime Error Log Entries: 0 00:06:48.017 Warning Temperature Time: 0 minutes 00:06:48.017 Critical Temperature Time: 0 minutes 00:06:48.017 00:06:48.017 Number of Queues 00:06:48.017 ================ 00:06:48.017 Number of I/O Submission Queues: 64 00:06:48.017 Number of I/O Completion Queues: 64 00:06:48.017 00:06:48.017 ZNS Specific Controller Data 00:06:48.017 ============================ 00:06:48.017 Zone Append Size Limit: 0 00:06:48.017 00:06:48.017 00:06:48.017 Active Namespaces 00:06:48.017 ================= 00:06:48.017 Namespace ID:1 00:06:48.017 Error Recovery Timeout: Unlimited 00:06:48.017 Command Set Identifier: NVM (00h) 00:06:48.017 Deallocate: Supported 00:06:48.017 Deallocated/Unwritten Error: Supported 00:06:48.017 Deallocated Read Value: All 0x00 00:06:48.017 Deallocate in Write Zeroes: Not Supported 00:06:48.017 Deallocated Guard Field: 0xFFFF 00:06:48.017 Flush: Supported 00:06:48.017 Reservation: Not Supported 00:06:48.017 Namespace Sharing Capabilities: Private 00:06:48.017 Size (in LBAs): 1310720 (5GiB) 00:06:48.017 Capacity (in LBAs): 1310720 (5GiB) 00:06:48.017 Utilization (in LBAs): 1310720 (5GiB) 00:06:48.017 Thin Provisioning: Not Supported 00:06:48.017 Per-NS Atomic Units: No 00:06:48.017 Maximum Single Source Range Length: 128 00:06:48.017 Maximum Copy Length: 128 00:06:48.017 Maximum Source Range Count: 128 00:06:48.017 NGUID/EUI64 Never Reused: No 00:06:48.017 Namespace Write Protected: No 00:06:48.017 Number of LBA Formats: 8 00:06:48.017 Current LBA Format: LBA Format #04 00:06:48.017 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.017 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.017 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.017 LBA Format #03: Data Size: 512 Metadata Size: 
64 00:06:48.017 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:48.017 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.017 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.017 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.017 00:06:48.017 NVM Specific Namespace Data 00:06:48.017 =========================== 00:06:48.017 Logical Block Storage Tag Mask: 0 00:06:48.017 Protection Information Capabilities: 00:06:48.017 16b Guard Protection Information Storage Tag Support: No 00:06:48.017 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.017 Storage Tag Check Read Support: No 00:06:48.017 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.018 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.018 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.018 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.018 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.018 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.018 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.018 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.018 ===================================================== 00:06:48.018 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:48.018 ===================================================== 00:06:48.018 Controller Capabilities/Features 00:06:48.018 ================================ 00:06:48.018 Vendor ID: 1b36 00:06:48.018 Subsystem Vendor ID: 1af4 00:06:48.018 Serial Number: 12343 00:06:48.018 Model Number: QEMU NVMe Ctrl 00:06:48.018 Firmware Version: 8.0.0 00:06:48.018 Recommended Arb Burst: 6 00:06:48.018 IEEE OUI Identifier: 00 54 52 00:06:48.018 Multi-path I/O 00:06:48.018 May have multiple subsystem ports: No 00:06:48.018 May have multiple controllers: Yes 00:06:48.018 Associated with SR-IOV VF: No 00:06:48.018 Max Data Transfer Size: 524288 00:06:48.018 Max Number of Namespaces: 256 00:06:48.018 Max Number of I/O Queues: 64 00:06:48.018 NVMe Specification Version (VS): 1.4 00:06:48.018 NVMe Specification Version (Identify): 1.4 00:06:48.018 Maximum Queue Entries: 2048 00:06:48.018 Contiguous Queues Required: Yes 00:06:48.018 Arbitration Mechanisms Supported 00:06:48.018 Weighted Round Robin: Not Supported 00:06:48.018 Vendor Specific: Not Supported 00:06:48.018 Reset Timeout: 7500 ms 00:06:48.018 Doorbell Stride: 4 bytes 00:06:48.018 NVM Subsystem Reset: Not Supported 00:06:48.018 Command Sets Supported 00:06:48.018 NVM Command Set: Supported 00:06:48.018 Boot Partition: Not Supported 00:06:48.018 Memory Page Size Minimum: 4096 bytes 00:06:48.018 Memory Page Size Maximum: 65536 bytes 00:06:48.018 Persistent Memory Region: Not Supported 00:06:48.018 Optional Asynchronous Events Supported 00:06:48.018 Namespace Attribute Notices: Supported 00:06:48.018 Firmware Activation Notices: Not Supported 00:06:48.018 ANA Change Notices: Not Supported 00:06:48.018 PLE Aggregate Log Change Notices: Not Supported 00:06:48.018 LBA Status Info Alert Notices: Not Supported 00:06:48.018 EGE Aggregate Log Change Notices: Not Supported 00:06:48.018 Normal NVM Subsystem Shutdown event: Not Supported 00:06:48.018 Zone Descriptor Change Notices: Not 
Supported 00:06:48.018 Discovery Log Change Notices: Not Supported 00:06:48.018 Controller Attributes 00:06:48.018 128-bit Host Identifier: Not Supported 00:06:48.018 Non-Operational Permissive Mode: Not Supported 00:06:48.018 NVM Sets: Not Supported 00:06:48.018 Read Recovery Levels: Not Supported 00:06:48.018 Endurance Groups: Supported 00:06:48.018 Predictable Latency Mode: Not Supported 00:06:48.018 Traffic Based Keep ALive: Not Supported 00:06:48.018 Namespace Granularity: Not Supported 00:06:48.018 SQ Associations: Not Supported 00:06:48.018 UUID List: Not Supported 00:06:48.018 Multi-Domain Subsystem: Not Supported 00:06:48.018 Fixed Capacity Management: Not Supported 00:06:48.018 Variable Capacity Management: Not Supported 00:06:48.018 Delete Endurance Group: Not Supported 00:06:48.018 Delete NVM Set: Not Supported 00:06:48.018 Extended LBA Formats Supported: Supported 00:06:48.018 Flexible Data Placement Supported: Supported 00:06:48.018 00:06:48.018 Controller Memory Buffer Support 00:06:48.018 ================================ 00:06:48.018 Supported: No 00:06:48.018 00:06:48.018 Persistent Memory Region Support 00:06:48.018 ================================ 00:06:48.018 Supported: No 00:06:48.018 00:06:48.018 Admin Command Set Attributes 00:06:48.018 ============================ 00:06:48.018 Security Send/Receive: Not Supported 00:06:48.018 Format NVM: Supported 00:06:48.018 Firmware Activate/Download: Not Supported 00:06:48.018 Namespace Management: Supported 00:06:48.018 Device Self-Test: Not Supported 00:06:48.018 Directives: Supported 00:06:48.018 NVMe-MI: Not Supported 00:06:48.018 Virtualization Management: Not Supported 00:06:48.018 Doorbell Buffer Config: Supported 00:06:48.018 Get LBA Status Capability: Not Supported 00:06:48.018 Command & Feature Lockdown Capability: Not Supported 00:06:48.018 Abort Command Limit: 4 00:06:48.018 Async Event Request Limit: 4 00:06:48.018 Number of Firmware Slots: N/A 00:06:48.018 Firmware Slot 1 Read-Only: N/A 00:06:48.018 Firmware Activation Without Reset: N/A 00:06:48.018 Multiple Update Detection Support: N/A 00:06:48.018 Firmware Update Granularity: No Information Provided 00:06:48.018 Per-Namespace SMART Log: Yes 00:06:48.018 Asymmetric Namespace Access Log Page: Not Supported 00:06:48.018 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:06:48.018 Command Effects Log Page: Supported 00:06:48.018 Get Log Page Extended Data: Supported 00:06:48.018 Telemetry Log Pages: Not Supported 00:06:48.018 Persistent Event Log Pages: Not Supported 00:06:48.018 Supported Log Pages Log Page: May Support 00:06:48.018 Commands Supported & Effects Log Page: Not Supported 00:06:48.018 Feature Identifiers & Effects Log Page:May Support 00:06:48.018 NVMe-MI Commands & Effects Log Page: May Support 00:06:48.018 Data Area 4 for Telemetry Log: Not Supported 00:06:48.018 Error Log Page Entries Supported: 1 00:06:48.018 Keep Alive: Not Supported 00:06:48.018 00:06:48.018 NVM Command Set Attributes 00:06:48.018 ========================== 00:06:48.018 Submission Queue Entry Size 00:06:48.018 Max: 64 00:06:48.018 Min: 64 00:06:48.018 Completion Queue Entry Size 00:06:48.018 Max: 16 00:06:48.018 Min: 16 00:06:48.018 Number of Namespaces: 256 00:06:48.018 Compare Command: Supported 00:06:48.018 Write Uncorrectable Command: Not Supported 00:06:48.018 Dataset Management Command: Supported 00:06:48.018 Write Zeroes Command: Supported 00:06:48.018 Set Features Save Field: Supported 00:06:48.018 Reservations: Not Supported 00:06:48.018 Timestamp: Supported 
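[Editor's note, not part of the captured output] Dumps like the ones around this point are line-oriented "Label: value" pairs, so individual fields can be pulled out with standard text tools. A hypothetical one-liner, assuming the label matches what spdk_nvme_identify prints and remembering that output for several controllers is concatenated:
  sudo ./build/bin/spdk_nvme_identify -i 0 | awk -F': ' '/^Serial Number:/ {print $2}'    # prints one serial per controller found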
00:06:48.018 Copy: Supported 00:06:48.018 Volatile Write Cache: Present 00:06:48.018 Atomic Write Unit (Normal): 1 00:06:48.018 Atomic Write Unit (PFail): 1 00:06:48.018 Atomic Compare & Write Unit: 1 00:06:48.018 Fused Compare & Write: Not Supported 00:06:48.018 Scatter-Gather List 00:06:48.018 SGL Command Set: Supported 00:06:48.018 SGL Keyed: Not Supported 00:06:48.018 SGL Bit Bucket Descriptor: Not Supported 00:06:48.018 SGL Metadata Pointer: Not Supported 00:06:48.018 Oversized SGL: Not Supported 00:06:48.018 SGL Metadata Address: Not Supported 00:06:48.018 SGL Offset: Not Supported 00:06:48.018 Transport SGL Data Block: Not Supported 00:06:48.018 Replay Protected Memory Block: Not Supported 00:06:48.018 00:06:48.018 Firmware Slot Information 00:06:48.018 ========================= 00:06:48.018 Active slot: 1 00:06:48.018 Slot 1 Firmware Revision: 1.0 00:06:48.018 00:06:48.018 00:06:48.018 Commands Supported and Effects 00:06:48.018 ============================== 00:06:48.018 Admin Commands 00:06:48.018 -------------- 00:06:48.018 Delete I/O Submission Queue (00h): Supported 00:06:48.018 Create I/O Submission Queue (01h): Supported 00:06:48.018 Get Log Page (02h): Supported 00:06:48.018 Delete I/O Completion Queue (04h): Supported 00:06:48.018 Create I/O Completion Queue (05h): Supported 00:06:48.018 Identify (06h): Supported 00:06:48.018 Abort (08h): Supported 00:06:48.018 Set Features (09h): Supported 00:06:48.018 Get Features (0Ah): Supported 00:06:48.018 Asynchronous Event Request (0Ch): Supported 00:06:48.018 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:48.018 Directive Send (19h): Supported 00:06:48.018 Directive Receive (1Ah): Supported 00:06:48.018 Virtualization Management (1Ch): Supported 00:06:48.018 Doorbell Buffer Config (7Ch): Supported 00:06:48.018 Format NVM (80h): Supported LBA-Change 00:06:48.018 I/O Commands 00:06:48.018 ------------ 00:06:48.018 Flush (00h): Supported LBA-Change 00:06:48.018 Write (01h): Supported LBA-Change 00:06:48.018 Read (02h): Supported 00:06:48.018 Compare (05h): Supported 00:06:48.018 Write Zeroes (08h): Supported LBA-Change 00:06:48.018 Dataset Management (09h): Supported LBA-Change 00:06:48.018 Unknown (0Ch): Supported 00:06:48.018 Unknown (12h): Supported 00:06:48.018 Copy (19h): Supported LBA-Change 00:06:48.018 Unknown (1Dh): Supported LBA-Change 00:06:48.018 00:06:48.018 Error Log 00:06:48.018 ========= 00:06:48.018 00:06:48.018 Arbitration 00:06:48.018 =========== 00:06:48.018 Arbitration Burst: no limit 00:06:48.018 00:06:48.018 Power Management 00:06:48.019 ================ 00:06:48.019 Number of Power States: 1 00:06:48.019 Current Power State: Power State #0 00:06:48.019 Power State #0: 00:06:48.019 Max Power: 25.00 W 00:06:48.019 Non-Operational State: Operational 00:06:48.019 Entry Latency: 16 microseconds 00:06:48.019 Exit Latency: 4 microseconds 00:06:48.019 Relative Read Throughput: 0 00:06:48.019 Relative Read Latency: 0 00:06:48.019 Relative Write Throughput: 0 00:06:48.019 Relative Write Latency: 0 00:06:48.019 Idle Power: Not Reported 00:06:48.019 Active Power: Not Reported 00:06:48.019 Non-Operational Permissive Mode: Not Supported 00:06:48.019 00:06:48.019 Health Information 00:06:48.019 ================== 00:06:48.019 Critical Warnings: 00:06:48.019 Available Spare Space: OK 00:06:48.019 Temperature: OK 00:06:48.019 Device Reliability: OK 00:06:48.019 Read Only: No 00:06:48.019 Volatile Memory Backup: OK 00:06:48.019 Current Temperature: 323 Kelvin (50 Celsius) 00:06:48.019 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:06:48.019 Available Spare: 0% 00:06:48.019 Available Spare Threshold: 0% 00:06:48.019 Life Percentage Used: 0% 00:06:48.019 Data Units Read: 874 00:06:48.019 Data Units Written: 803 00:06:48.019 Host Read Commands: 39244 00:06:48.019 Host Write Commands: 38667 00:06:48.019 Controller Busy Time: 0 minutes 00:06:48.019 Power Cycles: 0 00:06:48.019 Power On Hours: 0 hours 00:06:48.019 Unsafe Shutdowns: 0 00:06:48.019 Unrecoverable Media Errors: 0 00:06:48.019 Lifetime Error Log Entries: 0 00:06:48.019 Warning Temperature Time: 0 minutes 00:06:48.019 Critical Temperature Time: 0 minutes 00:06:48.019 00:06:48.019 Number of Queues 00:06:48.019 ================ 00:06:48.019 Number of I/O Submission Queues: 64 00:06:48.019 Number of I/O Completion Queues: 64 00:06:48.019 00:06:48.019 ZNS Specific Controller Data 00:06:48.019 ============================ 00:06:48.019 Zone Append Size Limit: 0 00:06:48.019 00:06:48.019 00:06:48.019 Active Namespaces 00:06:48.019 ================= 00:06:48.019 Namespace ID:1 00:06:48.019 Error Recovery Timeout: Unlimited 00:06:48.019 Command Set Identifier: NVM (00h) 00:06:48.019 Deallocate: Supported 00:06:48.019 Deallocated/Unwritten Error: Supported 00:06:48.019 Deallocated Read Value: All 0x00 00:06:48.019 Deallocate in Write Zeroes: Not Supported 00:06:48.019 Deallocated Guard Field: 0xFFFF 00:06:48.019 Flush: Supported 00:06:48.019 Reservation: Not Supported 00:06:48.019 Namespace Sharing Capabilities: Multiple Controllers 00:06:48.019 Size (in LBAs): 262144 (1GiB) 00:06:48.019 Capacity (in LBAs): 262144 (1GiB) 00:06:48.019 Utilization (in LBAs): 262144 (1GiB) 00:06:48.019 Thin Provisioning: Not Supported 00:06:48.019 Per-NS Atomic Units: No 00:06:48.019 Maximum Single Source Range Length: 128 00:06:48.019 Maximum Copy Length: 128 00:06:48.019 Maximum Source Range Count: 128 00:06:48.019 NGUID/EUI64 Never Reused: No 00:06:48.019 Namespace Write Protected: No 00:06:48.019 Endurance group ID: 1 00:06:48.019 Number of LBA Formats: 8 00:06:48.019 Current LBA Format: LBA Format #04 00:06:48.019 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.019 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.019 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.019 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:48.019 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:48.019 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.019 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.019 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.019 00:06:48.019 Get Feature FDP: 00:06:48.019 ================ 00:06:48.019 Enabled: Yes 00:06:48.019 FDP configuration index: 0 00:06:48.019 00:06:48.019 FDP configurations log page 00:06:48.019 =========================== 00:06:48.019 Number of FDP configurations: 1 00:06:48.019 Version: 0 00:06:48.019 Size: 112 00:06:48.019 FDP Configuration Descriptor: 0 00:06:48.019 Descriptor Size: 96 00:06:48.019 Reclaim Group Identifier format: 2 00:06:48.019 FDP Volatile Write Cache: Not Present 00:06:48.019 FDP Configuration: Valid 00:06:48.019 Vendor Specific Size: 0 00:06:48.019 Number of Reclaim Groups: 2 00:06:48.019 Number of Reclaim Unit Handles: 8 00:06:48.019 Max Placement Identifiers: 128 00:06:48.019 Number of Namespaces Supported: 256 00:06:48.019 Reclaim unit Nominal Size: 6000000 bytes 00:06:48.019 Estimated Reclaim Unit Time Limit: Not Reported 00:06:48.019 RUH Desc #000: RUH Type: Initially Isolated 00:06:48.019 RUH Desc #001: RUH Type: Initially Isolated 
00:06:48.019 RUH Desc #002: RUH Type: Initially Isolated 00:06:48.019 RUH Desc #003: RUH Type: Initially Isolated 00:06:48.019 RUH Desc #004: RUH Type: Initially Isolated 00:06:48.019 RUH Desc #005: RUH Type: Initially Isolated 00:06:48.019 RUH Desc #006: RUH Type: Initially Isolated 00:06:48.019 RUH Desc #007: RUH Type: Initially Isolated 00:06:48.019 00:06:48.019 FDP reclaim unit handle usage log page 00:06:48.019 ====================================== 00:06:48.019 Number of Reclaim Unit Handles: 8 00:06:48.019 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:06:48.019 RUH Usage Desc #001: RUH Attributes: Unused 00:06:48.019 RUH Usage Desc #002: RUH Attributes: Unused 00:06:48.019 RUH Usage Desc #003: RUH Attributes: Unused 00:06:48.019 RUH Usage Desc #004: RUH Attributes: Unused 00:06:48.019 RUH Usage Desc #005: RUH Attributes: Unused 00:06:48.019 RUH Usage Desc #006: RUH Attributes: Unused 00:06:48.019 RUH Usage Desc #007: RUH Attributes: Unused 00:06:48.019 00:06:48.019 FDP statistics log page 00:06:48.019 ======================= 00:06:48.019 Host bytes with metadata written: 520658944 00:06:48.019 [2024-11-17 01:26:56.399851] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62822 terminated unexpected 00:06:48.019 Media bytes with metadata written: 520716288 00:06:48.019 Media bytes erased: 0 00:06:48.019 00:06:48.019 FDP events log page 00:06:48.019 =================== 00:06:48.019 Number of FDP events: 0 00:06:48.019 00:06:48.019 NVM Specific Namespace Data 00:06:48.019 =========================== 00:06:48.019 Logical Block Storage Tag Mask: 0 00:06:48.019 Protection Information Capabilities: 00:06:48.019 16b Guard Protection Information Storage Tag Support: No 00:06:48.019 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.019 Storage Tag Check Read Support: No 00:06:48.019 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.019 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.019 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.019 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.019 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.019 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.019 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.019 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.019 ===================================================== 00:06:48.019 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:06:48.019 ===================================================== 00:06:48.019 Controller Capabilities/Features 00:06:48.019 ================================ 00:06:48.019 Vendor ID: 1b36 00:06:48.019 Subsystem Vendor ID: 1af4 00:06:48.019 Serial Number: 12342 00:06:48.019 Model Number: QEMU NVMe Ctrl 00:06:48.019 Firmware Version: 8.0.0 00:06:48.019 Recommended Arb Burst: 6 00:06:48.019 IEEE OUI Identifier: 00 54 52 00:06:48.019 Multi-path I/O 00:06:48.019 May have multiple subsystem ports: No 00:06:48.019 May have multiple controllers: No 00:06:48.019 Associated with SR-IOV VF: No 00:06:48.019 Max Data Transfer Size: 524288 00:06:48.019 Max Number of Namespaces: 256 00:06:48.019 Max Number of I/O 
Queues: 64 00:06:48.019 NVMe Specification Version (VS): 1.4 00:06:48.019 NVMe Specification Version (Identify): 1.4 00:06:48.019 Maximum Queue Entries: 2048 00:06:48.019 Contiguous Queues Required: Yes 00:06:48.019 Arbitration Mechanisms Supported 00:06:48.019 Weighted Round Robin: Not Supported 00:06:48.019 Vendor Specific: Not Supported 00:06:48.019 Reset Timeout: 7500 ms 00:06:48.019 Doorbell Stride: 4 bytes 00:06:48.020 NVM Subsystem Reset: Not Supported 00:06:48.020 Command Sets Supported 00:06:48.020 NVM Command Set: Supported 00:06:48.020 Boot Partition: Not Supported 00:06:48.020 Memory Page Size Minimum: 4096 bytes 00:06:48.020 Memory Page Size Maximum: 65536 bytes 00:06:48.020 Persistent Memory Region: Not Supported 00:06:48.020 Optional Asynchronous Events Supported 00:06:48.020 Namespace Attribute Notices: Supported 00:06:48.020 Firmware Activation Notices: Not Supported 00:06:48.020 ANA Change Notices: Not Supported 00:06:48.020 PLE Aggregate Log Change Notices: Not Supported 00:06:48.020 LBA Status Info Alert Notices: Not Supported 00:06:48.020 EGE Aggregate Log Change Notices: Not Supported 00:06:48.020 Normal NVM Subsystem Shutdown event: Not Supported 00:06:48.020 Zone Descriptor Change Notices: Not Supported 00:06:48.020 Discovery Log Change Notices: Not Supported 00:06:48.020 Controller Attributes 00:06:48.020 128-bit Host Identifier: Not Supported 00:06:48.020 Non-Operational Permissive Mode: Not Supported 00:06:48.020 NVM Sets: Not Supported 00:06:48.020 Read Recovery Levels: Not Supported 00:06:48.020 Endurance Groups: Not Supported 00:06:48.020 Predictable Latency Mode: Not Supported 00:06:48.020 Traffic Based Keep ALive: Not Supported 00:06:48.020 Namespace Granularity: Not Supported 00:06:48.020 SQ Associations: Not Supported 00:06:48.020 UUID List: Not Supported 00:06:48.020 Multi-Domain Subsystem: Not Supported 00:06:48.020 Fixed Capacity Management: Not Supported 00:06:48.020 Variable Capacity Management: Not Supported 00:06:48.020 Delete Endurance Group: Not Supported 00:06:48.020 Delete NVM Set: Not Supported 00:06:48.020 Extended LBA Formats Supported: Supported 00:06:48.020 Flexible Data Placement Supported: Not Supported 00:06:48.020 00:06:48.020 Controller Memory Buffer Support 00:06:48.020 ================================ 00:06:48.020 Supported: No 00:06:48.020 00:06:48.020 Persistent Memory Region Support 00:06:48.020 ================================ 00:06:48.020 Supported: No 00:06:48.020 00:06:48.020 Admin Command Set Attributes 00:06:48.020 ============================ 00:06:48.020 Security Send/Receive: Not Supported 00:06:48.020 Format NVM: Supported 00:06:48.020 Firmware Activate/Download: Not Supported 00:06:48.020 Namespace Management: Supported 00:06:48.020 Device Self-Test: Not Supported 00:06:48.020 Directives: Supported 00:06:48.020 NVMe-MI: Not Supported 00:06:48.020 Virtualization Management: Not Supported 00:06:48.020 Doorbell Buffer Config: Supported 00:06:48.020 Get LBA Status Capability: Not Supported 00:06:48.020 Command & Feature Lockdown Capability: Not Supported 00:06:48.020 Abort Command Limit: 4 00:06:48.020 Async Event Request Limit: 4 00:06:48.020 Number of Firmware Slots: N/A 00:06:48.020 Firmware Slot 1 Read-Only: N/A 00:06:48.020 Firmware Activation Without Reset: N/A 00:06:48.020 Multiple Update Detection Support: N/A 00:06:48.020 Firmware Update Granularity: No Information Provided 00:06:48.020 Per-Namespace SMART Log: Yes 00:06:48.020 Asymmetric Namespace Access Log Page: Not Supported 00:06:48.020 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:06:48.020 Command Effects Log Page: Supported 00:06:48.020 Get Log Page Extended Data: Supported 00:06:48.020 Telemetry Log Pages: Not Supported 00:06:48.020 Persistent Event Log Pages: Not Supported 00:06:48.020 Supported Log Pages Log Page: May Support 00:06:48.020 Commands Supported & Effects Log Page: Not Supported 00:06:48.020 Feature Identifiers & Effects Log Page:May Support 00:06:48.020 NVMe-MI Commands & Effects Log Page: May Support 00:06:48.020 Data Area 4 for Telemetry Log: Not Supported 00:06:48.020 Error Log Page Entries Supported: 1 00:06:48.020 Keep Alive: Not Supported 00:06:48.020 00:06:48.020 NVM Command Set Attributes 00:06:48.020 ========================== 00:06:48.020 Submission Queue Entry Size 00:06:48.020 Max: 64 00:06:48.020 Min: 64 00:06:48.020 Completion Queue Entry Size 00:06:48.020 Max: 16 00:06:48.020 Min: 16 00:06:48.020 Number of Namespaces: 256 00:06:48.020 Compare Command: Supported 00:06:48.020 Write Uncorrectable Command: Not Supported 00:06:48.020 Dataset Management Command: Supported 00:06:48.020 Write Zeroes Command: Supported 00:06:48.020 Set Features Save Field: Supported 00:06:48.020 Reservations: Not Supported 00:06:48.020 Timestamp: Supported 00:06:48.020 Copy: Supported 00:06:48.020 Volatile Write Cache: Present 00:06:48.020 Atomic Write Unit (Normal): 1 00:06:48.020 Atomic Write Unit (PFail): 1 00:06:48.020 Atomic Compare & Write Unit: 1 00:06:48.020 Fused Compare & Write: Not Supported 00:06:48.020 Scatter-Gather List 00:06:48.020 SGL Command Set: Supported 00:06:48.020 SGL Keyed: Not Supported 00:06:48.020 SGL Bit Bucket Descriptor: Not Supported 00:06:48.020 SGL Metadata Pointer: Not Supported 00:06:48.020 Oversized SGL: Not Supported 00:06:48.020 SGL Metadata Address: Not Supported 00:06:48.020 SGL Offset: Not Supported 00:06:48.020 Transport SGL Data Block: Not Supported 00:06:48.020 Replay Protected Memory Block: Not Supported 00:06:48.020 00:06:48.020 Firmware Slot Information 00:06:48.020 ========================= 00:06:48.020 Active slot: 1 00:06:48.020 Slot 1 Firmware Revision: 1.0 00:06:48.020 00:06:48.020 00:06:48.020 Commands Supported and Effects 00:06:48.020 ============================== 00:06:48.020 Admin Commands 00:06:48.020 -------------- 00:06:48.020 Delete I/O Submission Queue (00h): Supported 00:06:48.020 Create I/O Submission Queue (01h): Supported 00:06:48.020 Get Log Page (02h): Supported 00:06:48.020 Delete I/O Completion Queue (04h): Supported 00:06:48.020 Create I/O Completion Queue (05h): Supported 00:06:48.020 Identify (06h): Supported 00:06:48.020 Abort (08h): Supported 00:06:48.020 Set Features (09h): Supported 00:06:48.020 Get Features (0Ah): Supported 00:06:48.020 Asynchronous Event Request (0Ch): Supported 00:06:48.020 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:48.020 Directive Send (19h): Supported 00:06:48.020 Directive Receive (1Ah): Supported 00:06:48.020 Virtualization Management (1Ch): Supported 00:06:48.020 Doorbell Buffer Config (7Ch): Supported 00:06:48.020 Format NVM (80h): Supported LBA-Change 00:06:48.020 I/O Commands 00:06:48.020 ------------ 00:06:48.020 Flush (00h): Supported LBA-Change 00:06:48.020 Write (01h): Supported LBA-Change 00:06:48.020 Read (02h): Supported 00:06:48.020 Compare (05h): Supported 00:06:48.020 Write Zeroes (08h): Supported LBA-Change 00:06:48.020 Dataset Management (09h): Supported LBA-Change 00:06:48.020 Unknown (0Ch): Supported 00:06:48.020 Unknown (12h): Supported 00:06:48.020 Copy (19h): Supported LBA-Change 
00:06:48.020 Unknown (1Dh): Supported LBA-Change 00:06:48.020 00:06:48.020 Error Log 00:06:48.020 ========= 00:06:48.020 00:06:48.020 Arbitration 00:06:48.020 =========== 00:06:48.020 Arbitration Burst: no limit 00:06:48.020 00:06:48.020 Power Management 00:06:48.020 ================ 00:06:48.020 Number of Power States: 1 00:06:48.020 Current Power State: Power State #0 00:06:48.020 Power State #0: 00:06:48.020 Max Power: 25.00 W 00:06:48.020 Non-Operational State: Operational 00:06:48.020 Entry Latency: 16 microseconds 00:06:48.020 Exit Latency: 4 microseconds 00:06:48.020 Relative Read Throughput: 0 00:06:48.020 Relative Read Latency: 0 00:06:48.020 Relative Write Throughput: 0 00:06:48.020 Relative Write Latency: 0 00:06:48.020 Idle Power: Not Reported 00:06:48.020 Active Power: Not Reported 00:06:48.020 Non-Operational Permissive Mode: Not Supported 00:06:48.020 00:06:48.020 Health Information 00:06:48.020 ================== 00:06:48.020 Critical Warnings: 00:06:48.020 Available Spare Space: OK 00:06:48.020 Temperature: OK 00:06:48.020 Device Reliability: OK 00:06:48.020 Read Only: No 00:06:48.020 Volatile Memory Backup: OK 00:06:48.021 Current Temperature: 323 Kelvin (50 Celsius) 00:06:48.021 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:48.021 Available Spare: 0% 00:06:48.021 Available Spare Threshold: 0% 00:06:48.021 Life Percentage Used: 0% 00:06:48.021 Data Units Read: 2194 00:06:48.021 Data Units Written: 1981 00:06:48.021 Host Read Commands: 114098 00:06:48.021 Host Write Commands: 112367 00:06:48.021 Controller Busy Time: 0 minutes 00:06:48.021 Power Cycles: 0 00:06:48.021 Power On Hours: 0 hours 00:06:48.021 Unsafe Shutdowns: 0 00:06:48.021 Unrecoverable Media Errors: 0 00:06:48.021 Lifetime Error Log Entries: 0 00:06:48.021 Warning Temperature Time: 0 minutes 00:06:48.021 Critical Temperature Time: 0 minutes 00:06:48.021 00:06:48.021 Number of Queues 00:06:48.021 ================ 00:06:48.021 Number of I/O Submission Queues: 64 00:06:48.021 Number of I/O Completion Queues: 64 00:06:48.021 00:06:48.021 ZNS Specific Controller Data 00:06:48.021 ============================ 00:06:48.021 Zone Append Size Limit: 0 00:06:48.021 00:06:48.021 00:06:48.021 Active Namespaces 00:06:48.021 ================= 00:06:48.021 Namespace ID:1 00:06:48.021 Error Recovery Timeout: Unlimited 00:06:48.021 Command Set Identifier: NVM (00h) 00:06:48.021 Deallocate: Supported 00:06:48.021 Deallocated/Unwritten Error: Supported 00:06:48.021 Deallocated Read Value: All 0x00 00:06:48.021 Deallocate in Write Zeroes: Not Supported 00:06:48.021 Deallocated Guard Field: 0xFFFF 00:06:48.021 Flush: Supported 00:06:48.021 Reservation: Not Supported 00:06:48.021 Namespace Sharing Capabilities: Private 00:06:48.021 Size (in LBAs): 1048576 (4GiB) 00:06:48.021 Capacity (in LBAs): 1048576 (4GiB) 00:06:48.021 Utilization (in LBAs): 1048576 (4GiB) 00:06:48.021 Thin Provisioning: Not Supported 00:06:48.021 Per-NS Atomic Units: No 00:06:48.021 Maximum Single Source Range Length: 128 00:06:48.021 Maximum Copy Length: 128 00:06:48.021 Maximum Source Range Count: 128 00:06:48.021 NGUID/EUI64 Never Reused: No 00:06:48.021 Namespace Write Protected: No 00:06:48.021 Number of LBA Formats: 8 00:06:48.021 Current LBA Format: LBA Format #04 00:06:48.021 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.021 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.021 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.021 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:48.021 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:06:48.021 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.021 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.021 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.021 00:06:48.021 NVM Specific Namespace Data 00:06:48.021 =========================== 00:06:48.021 Logical Block Storage Tag Mask: 0 00:06:48.021 Protection Information Capabilities: 00:06:48.021 16b Guard Protection Information Storage Tag Support: No 00:06:48.021 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.021 Storage Tag Check Read Support: No 00:06:48.021 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Namespace ID:2 00:06:48.021 Error Recovery Timeout: Unlimited 00:06:48.021 Command Set Identifier: NVM (00h) 00:06:48.021 Deallocate: Supported 00:06:48.021 Deallocated/Unwritten Error: Supported 00:06:48.021 Deallocated Read Value: All 0x00 00:06:48.021 Deallocate in Write Zeroes: Not Supported 00:06:48.021 Deallocated Guard Field: 0xFFFF 00:06:48.021 Flush: Supported 00:06:48.021 Reservation: Not Supported 00:06:48.021 Namespace Sharing Capabilities: Private 00:06:48.021 Size (in LBAs): 1048576 (4GiB) 00:06:48.021 Capacity (in LBAs): 1048576 (4GiB) 00:06:48.021 Utilization (in LBAs): 1048576 (4GiB) 00:06:48.021 Thin Provisioning: Not Supported 00:06:48.021 Per-NS Atomic Units: No 00:06:48.021 Maximum Single Source Range Length: 128 00:06:48.021 Maximum Copy Length: 128 00:06:48.021 Maximum Source Range Count: 128 00:06:48.021 NGUID/EUI64 Never Reused: No 00:06:48.021 Namespace Write Protected: No 00:06:48.021 Number of LBA Formats: 8 00:06:48.021 Current LBA Format: LBA Format #04 00:06:48.021 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.021 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.021 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.021 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:48.021 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:48.021 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.021 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.021 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.021 00:06:48.021 NVM Specific Namespace Data 00:06:48.021 =========================== 00:06:48.021 Logical Block Storage Tag Mask: 0 00:06:48.021 Protection Information Capabilities: 00:06:48.021 16b Guard Protection Information Storage Tag Support: No 00:06:48.021 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.021 Storage Tag Check Read Support: No 00:06:48.021 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:06:48.021 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Namespace ID:3 00:06:48.021 Error Recovery Timeout: Unlimited 00:06:48.021 Command Set Identifier: NVM (00h) 00:06:48.021 Deallocate: Supported 00:06:48.021 Deallocated/Unwritten Error: Supported 00:06:48.021 Deallocated Read Value: All 0x00 00:06:48.021 Deallocate in Write Zeroes: Not Supported 00:06:48.021 Deallocated Guard Field: 0xFFFF 00:06:48.021 Flush: Supported 00:06:48.021 Reservation: Not Supported 00:06:48.021 Namespace Sharing Capabilities: Private 00:06:48.021 Size (in LBAs): 1048576 (4GiB) 00:06:48.021 Capacity (in LBAs): 1048576 (4GiB) 00:06:48.021 Utilization (in LBAs): 1048576 (4GiB) 00:06:48.021 Thin Provisioning: Not Supported 00:06:48.021 Per-NS Atomic Units: No 00:06:48.021 Maximum Single Source Range Length: 128 00:06:48.021 Maximum Copy Length: 128 00:06:48.021 Maximum Source Range Count: 128 00:06:48.021 NGUID/EUI64 Never Reused: No 00:06:48.021 Namespace Write Protected: No 00:06:48.021 Number of LBA Formats: 8 00:06:48.021 Current LBA Format: LBA Format #04 00:06:48.021 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.021 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.021 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.021 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:48.021 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:48.021 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.021 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.021 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.021 00:06:48.021 NVM Specific Namespace Data 00:06:48.021 =========================== 00:06:48.021 Logical Block Storage Tag Mask: 0 00:06:48.021 Protection Information Capabilities: 00:06:48.021 16b Guard Protection Information Storage Tag Support: No 00:06:48.021 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.021 Storage Tag Check Read Support: No 00:06:48.021 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.021 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:06:48.021 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:06:48.281 ===================================================== 00:06:48.281 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:48.281 ===================================================== 00:06:48.281 Controller Capabilities/Features 00:06:48.281 ================================ 00:06:48.281 Vendor ID: 1b36 00:06:48.281 Subsystem Vendor ID: 1af4 00:06:48.281 Serial Number: 12340 00:06:48.281 Model Number: QEMU NVMe Ctrl 00:06:48.281 Firmware Version: 8.0.0 00:06:48.281 Recommended Arb Burst: 6 00:06:48.281 IEEE OUI Identifier: 00 54 52 00:06:48.281 Multi-path I/O 00:06:48.281 May have multiple subsystem ports: No 00:06:48.281 May have multiple controllers: No 00:06:48.281 Associated with SR-IOV VF: No 00:06:48.281 Max Data Transfer Size: 524288 00:06:48.281 Max Number of Namespaces: 256 00:06:48.281 Max Number of I/O Queues: 64 00:06:48.281 NVMe Specification Version (VS): 1.4 00:06:48.281 NVMe Specification Version (Identify): 1.4 00:06:48.281 Maximum Queue Entries: 2048 00:06:48.281 Contiguous Queues Required: Yes 00:06:48.281 Arbitration Mechanisms Supported 00:06:48.281 Weighted Round Robin: Not Supported 00:06:48.281 Vendor Specific: Not Supported 00:06:48.281 Reset Timeout: 7500 ms 00:06:48.281 Doorbell Stride: 4 bytes 00:06:48.281 NVM Subsystem Reset: Not Supported 00:06:48.281 Command Sets Supported 00:06:48.281 NVM Command Set: Supported 00:06:48.281 Boot Partition: Not Supported 00:06:48.281 Memory Page Size Minimum: 4096 bytes 00:06:48.281 Memory Page Size Maximum: 65536 bytes 00:06:48.281 Persistent Memory Region: Not Supported 00:06:48.281 Optional Asynchronous Events Supported 00:06:48.281 Namespace Attribute Notices: Supported 00:06:48.281 Firmware Activation Notices: Not Supported 00:06:48.281 ANA Change Notices: Not Supported 00:06:48.281 PLE Aggregate Log Change Notices: Not Supported 00:06:48.281 LBA Status Info Alert Notices: Not Supported 00:06:48.281 EGE Aggregate Log Change Notices: Not Supported 00:06:48.281 Normal NVM Subsystem Shutdown event: Not Supported 00:06:48.281 Zone Descriptor Change Notices: Not Supported 00:06:48.281 Discovery Log Change Notices: Not Supported 00:06:48.281 Controller Attributes 00:06:48.281 128-bit Host Identifier: Not Supported 00:06:48.281 Non-Operational Permissive Mode: Not Supported 00:06:48.281 NVM Sets: Not Supported 00:06:48.281 Read Recovery Levels: Not Supported 00:06:48.281 Endurance Groups: Not Supported 00:06:48.281 Predictable Latency Mode: Not Supported 00:06:48.281 Traffic Based Keep ALive: Not Supported 00:06:48.281 Namespace Granularity: Not Supported 00:06:48.281 SQ Associations: Not Supported 00:06:48.281 UUID List: Not Supported 00:06:48.281 Multi-Domain Subsystem: Not Supported 00:06:48.281 Fixed Capacity Management: Not Supported 00:06:48.281 Variable Capacity Management: Not Supported 00:06:48.281 Delete Endurance Group: Not Supported 00:06:48.281 Delete NVM Set: Not Supported 00:06:48.281 Extended LBA Formats Supported: Supported 00:06:48.281 Flexible Data Placement Supported: Not Supported 00:06:48.281 00:06:48.281 Controller Memory Buffer Support 00:06:48.281 ================================ 00:06:48.281 Supported: No 00:06:48.281 00:06:48.281 Persistent Memory Region Support 00:06:48.281 ================================ 00:06:48.281 Supported: No 00:06:48.281 00:06:48.281 Admin Command Set Attributes 00:06:48.281 ============================ 00:06:48.281 Security Send/Receive: Not Supported 00:06:48.281 
Format NVM: Supported 00:06:48.281 Firmware Activate/Download: Not Supported 00:06:48.281 Namespace Management: Supported 00:06:48.281 Device Self-Test: Not Supported 00:06:48.281 Directives: Supported 00:06:48.281 NVMe-MI: Not Supported 00:06:48.281 Virtualization Management: Not Supported 00:06:48.281 Doorbell Buffer Config: Supported 00:06:48.281 Get LBA Status Capability: Not Supported 00:06:48.281 Command & Feature Lockdown Capability: Not Supported 00:06:48.281 Abort Command Limit: 4 00:06:48.281 Async Event Request Limit: 4 00:06:48.281 Number of Firmware Slots: N/A 00:06:48.281 Firmware Slot 1 Read-Only: N/A 00:06:48.281 Firmware Activation Without Reset: N/A 00:06:48.281 Multiple Update Detection Support: N/A 00:06:48.281 Firmware Update Granularity: No Information Provided 00:06:48.281 Per-Namespace SMART Log: Yes 00:06:48.281 Asymmetric Namespace Access Log Page: Not Supported 00:06:48.281 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:06:48.282 Command Effects Log Page: Supported 00:06:48.282 Get Log Page Extended Data: Supported 00:06:48.282 Telemetry Log Pages: Not Supported 00:06:48.282 Persistent Event Log Pages: Not Supported 00:06:48.282 Supported Log Pages Log Page: May Support 00:06:48.282 Commands Supported & Effects Log Page: Not Supported 00:06:48.282 Feature Identifiers & Effects Log Page:May Support 00:06:48.282 NVMe-MI Commands & Effects Log Page: May Support 00:06:48.282 Data Area 4 for Telemetry Log: Not Supported 00:06:48.282 Error Log Page Entries Supported: 1 00:06:48.282 Keep Alive: Not Supported 00:06:48.282 00:06:48.282 NVM Command Set Attributes 00:06:48.282 ========================== 00:06:48.282 Submission Queue Entry Size 00:06:48.282 Max: 64 00:06:48.282 Min: 64 00:06:48.282 Completion Queue Entry Size 00:06:48.282 Max: 16 00:06:48.282 Min: 16 00:06:48.282 Number of Namespaces: 256 00:06:48.282 Compare Command: Supported 00:06:48.282 Write Uncorrectable Command: Not Supported 00:06:48.282 Dataset Management Command: Supported 00:06:48.282 Write Zeroes Command: Supported 00:06:48.282 Set Features Save Field: Supported 00:06:48.282 Reservations: Not Supported 00:06:48.282 Timestamp: Supported 00:06:48.282 Copy: Supported 00:06:48.282 Volatile Write Cache: Present 00:06:48.282 Atomic Write Unit (Normal): 1 00:06:48.282 Atomic Write Unit (PFail): 1 00:06:48.282 Atomic Compare & Write Unit: 1 00:06:48.282 Fused Compare & Write: Not Supported 00:06:48.282 Scatter-Gather List 00:06:48.282 SGL Command Set: Supported 00:06:48.282 SGL Keyed: Not Supported 00:06:48.282 SGL Bit Bucket Descriptor: Not Supported 00:06:48.282 SGL Metadata Pointer: Not Supported 00:06:48.282 Oversized SGL: Not Supported 00:06:48.282 SGL Metadata Address: Not Supported 00:06:48.282 SGL Offset: Not Supported 00:06:48.282 Transport SGL Data Block: Not Supported 00:06:48.282 Replay Protected Memory Block: Not Supported 00:06:48.282 00:06:48.282 Firmware Slot Information 00:06:48.282 ========================= 00:06:48.282 Active slot: 1 00:06:48.282 Slot 1 Firmware Revision: 1.0 00:06:48.282 00:06:48.282 00:06:48.282 Commands Supported and Effects 00:06:48.282 ============================== 00:06:48.282 Admin Commands 00:06:48.282 -------------- 00:06:48.282 Delete I/O Submission Queue (00h): Supported 00:06:48.282 Create I/O Submission Queue (01h): Supported 00:06:48.282 Get Log Page (02h): Supported 00:06:48.282 Delete I/O Completion Queue (04h): Supported 00:06:48.282 Create I/O Completion Queue (05h): Supported 00:06:48.282 Identify (06h): Supported 00:06:48.282 Abort (08h): Supported 
00:06:48.282 Set Features (09h): Supported 00:06:48.282 Get Features (0Ah): Supported 00:06:48.282 Asynchronous Event Request (0Ch): Supported 00:06:48.282 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:48.282 Directive Send (19h): Supported 00:06:48.282 Directive Receive (1Ah): Supported 00:06:48.282 Virtualization Management (1Ch): Supported 00:06:48.282 Doorbell Buffer Config (7Ch): Supported 00:06:48.282 Format NVM (80h): Supported LBA-Change 00:06:48.282 I/O Commands 00:06:48.282 ------------ 00:06:48.282 Flush (00h): Supported LBA-Change 00:06:48.282 Write (01h): Supported LBA-Change 00:06:48.282 Read (02h): Supported 00:06:48.282 Compare (05h): Supported 00:06:48.282 Write Zeroes (08h): Supported LBA-Change 00:06:48.282 Dataset Management (09h): Supported LBA-Change 00:06:48.282 Unknown (0Ch): Supported 00:06:48.282 Unknown (12h): Supported 00:06:48.282 Copy (19h): Supported LBA-Change 00:06:48.282 Unknown (1Dh): Supported LBA-Change 00:06:48.282 00:06:48.282 Error Log 00:06:48.282 ========= 00:06:48.282 00:06:48.282 Arbitration 00:06:48.282 =========== 00:06:48.282 Arbitration Burst: no limit 00:06:48.282 00:06:48.282 Power Management 00:06:48.282 ================ 00:06:48.282 Number of Power States: 1 00:06:48.282 Current Power State: Power State #0 00:06:48.282 Power State #0: 00:06:48.282 Max Power: 25.00 W 00:06:48.282 Non-Operational State: Operational 00:06:48.282 Entry Latency: 16 microseconds 00:06:48.282 Exit Latency: 4 microseconds 00:06:48.282 Relative Read Throughput: 0 00:06:48.282 Relative Read Latency: 0 00:06:48.282 Relative Write Throughput: 0 00:06:48.282 Relative Write Latency: 0 00:06:48.282 Idle Power: Not Reported 00:06:48.282 Active Power: Not Reported 00:06:48.282 Non-Operational Permissive Mode: Not Supported 00:06:48.282 00:06:48.282 Health Information 00:06:48.282 ================== 00:06:48.282 Critical Warnings: 00:06:48.282 Available Spare Space: OK 00:06:48.282 Temperature: OK 00:06:48.282 Device Reliability: OK 00:06:48.282 Read Only: No 00:06:48.282 Volatile Memory Backup: OK 00:06:48.282 Current Temperature: 323 Kelvin (50 Celsius) 00:06:48.282 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:48.282 Available Spare: 0% 00:06:48.282 Available Spare Threshold: 0% 00:06:48.282 Life Percentage Used: 0% 00:06:48.282 Data Units Read: 689 00:06:48.282 Data Units Written: 617 00:06:48.282 Host Read Commands: 37380 00:06:48.282 Host Write Commands: 37166 00:06:48.282 Controller Busy Time: 0 minutes 00:06:48.282 Power Cycles: 0 00:06:48.282 Power On Hours: 0 hours 00:06:48.282 Unsafe Shutdowns: 0 00:06:48.282 Unrecoverable Media Errors: 0 00:06:48.282 Lifetime Error Log Entries: 0 00:06:48.282 Warning Temperature Time: 0 minutes 00:06:48.282 Critical Temperature Time: 0 minutes 00:06:48.282 00:06:48.282 Number of Queues 00:06:48.282 ================ 00:06:48.282 Number of I/O Submission Queues: 64 00:06:48.282 Number of I/O Completion Queues: 64 00:06:48.282 00:06:48.282 ZNS Specific Controller Data 00:06:48.282 ============================ 00:06:48.282 Zone Append Size Limit: 0 00:06:48.282 00:06:48.282 00:06:48.282 Active Namespaces 00:06:48.282 ================= 00:06:48.282 Namespace ID:1 00:06:48.282 Error Recovery Timeout: Unlimited 00:06:48.282 Command Set Identifier: NVM (00h) 00:06:48.282 Deallocate: Supported 00:06:48.282 Deallocated/Unwritten Error: Supported 00:06:48.282 Deallocated Read Value: All 0x00 00:06:48.282 Deallocate in Write Zeroes: Not Supported 00:06:48.282 Deallocated Guard Field: 0xFFFF 00:06:48.282 Flush: 
Supported 00:06:48.282 Reservation: Not Supported 00:06:48.282 Metadata Transferred as: Separate Metadata Buffer 00:06:48.282 Namespace Sharing Capabilities: Private 00:06:48.282 Size (in LBAs): 1548666 (5GiB) 00:06:48.282 Capacity (in LBAs): 1548666 (5GiB) 00:06:48.282 Utilization (in LBAs): 1548666 (5GiB) 00:06:48.282 Thin Provisioning: Not Supported 00:06:48.282 Per-NS Atomic Units: No 00:06:48.282 Maximum Single Source Range Length: 128 00:06:48.282 Maximum Copy Length: 128 00:06:48.282 Maximum Source Range Count: 128 00:06:48.282 NGUID/EUI64 Never Reused: No 00:06:48.282 Namespace Write Protected: No 00:06:48.282 Number of LBA Formats: 8 00:06:48.282 Current LBA Format: LBA Format #07 00:06:48.282 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.282 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.282 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.282 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:48.282 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:48.282 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.282 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.282 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.282 00:06:48.282 NVM Specific Namespace Data 00:06:48.282 =========================== 00:06:48.282 Logical Block Storage Tag Mask: 0 00:06:48.282 Protection Information Capabilities: 00:06:48.282 16b Guard Protection Information Storage Tag Support: No 00:06:48.282 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.282 Storage Tag Check Read Support: No 00:06:48.282 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.282 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.282 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.282 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.282 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.282 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.282 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.282 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.282 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:06:48.282 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:06:48.541 ===================================================== 00:06:48.541 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:48.541 ===================================================== 00:06:48.541 Controller Capabilities/Features 00:06:48.541 ================================ 00:06:48.541 Vendor ID: 1b36 00:06:48.541 Subsystem Vendor ID: 1af4 00:06:48.541 Serial Number: 12341 00:06:48.541 Model Number: QEMU NVMe Ctrl 00:06:48.541 Firmware Version: 8.0.0 00:06:48.541 Recommended Arb Burst: 6 00:06:48.541 IEEE OUI Identifier: 00 54 52 00:06:48.541 Multi-path I/O 00:06:48.542 May have multiple subsystem ports: No 00:06:48.542 May have multiple controllers: No 00:06:48.542 Associated with SR-IOV VF: No 00:06:48.542 Max Data Transfer Size: 524288 00:06:48.542 Max Number of Namespaces: 256 00:06:48.542 Max Number of I/O Queues: 64 00:06:48.542 NVMe 
Specification Version (VS): 1.4 00:06:48.542 NVMe Specification Version (Identify): 1.4 00:06:48.542 Maximum Queue Entries: 2048 00:06:48.542 Contiguous Queues Required: Yes 00:06:48.542 Arbitration Mechanisms Supported 00:06:48.542 Weighted Round Robin: Not Supported 00:06:48.542 Vendor Specific: Not Supported 00:06:48.542 Reset Timeout: 7500 ms 00:06:48.542 Doorbell Stride: 4 bytes 00:06:48.542 NVM Subsystem Reset: Not Supported 00:06:48.542 Command Sets Supported 00:06:48.542 NVM Command Set: Supported 00:06:48.542 Boot Partition: Not Supported 00:06:48.542 Memory Page Size Minimum: 4096 bytes 00:06:48.542 Memory Page Size Maximum: 65536 bytes 00:06:48.542 Persistent Memory Region: Not Supported 00:06:48.542 Optional Asynchronous Events Supported 00:06:48.542 Namespace Attribute Notices: Supported 00:06:48.542 Firmware Activation Notices: Not Supported 00:06:48.542 ANA Change Notices: Not Supported 00:06:48.542 PLE Aggregate Log Change Notices: Not Supported 00:06:48.542 LBA Status Info Alert Notices: Not Supported 00:06:48.542 EGE Aggregate Log Change Notices: Not Supported 00:06:48.542 Normal NVM Subsystem Shutdown event: Not Supported 00:06:48.542 Zone Descriptor Change Notices: Not Supported 00:06:48.542 Discovery Log Change Notices: Not Supported 00:06:48.542 Controller Attributes 00:06:48.542 128-bit Host Identifier: Not Supported 00:06:48.542 Non-Operational Permissive Mode: Not Supported 00:06:48.542 NVM Sets: Not Supported 00:06:48.542 Read Recovery Levels: Not Supported 00:06:48.542 Endurance Groups: Not Supported 00:06:48.542 Predictable Latency Mode: Not Supported 00:06:48.542 Traffic Based Keep ALive: Not Supported 00:06:48.542 Namespace Granularity: Not Supported 00:06:48.542 SQ Associations: Not Supported 00:06:48.542 UUID List: Not Supported 00:06:48.542 Multi-Domain Subsystem: Not Supported 00:06:48.542 Fixed Capacity Management: Not Supported 00:06:48.542 Variable Capacity Management: Not Supported 00:06:48.542 Delete Endurance Group: Not Supported 00:06:48.542 Delete NVM Set: Not Supported 00:06:48.542 Extended LBA Formats Supported: Supported 00:06:48.542 Flexible Data Placement Supported: Not Supported 00:06:48.542 00:06:48.542 Controller Memory Buffer Support 00:06:48.542 ================================ 00:06:48.542 Supported: No 00:06:48.542 00:06:48.542 Persistent Memory Region Support 00:06:48.542 ================================ 00:06:48.542 Supported: No 00:06:48.542 00:06:48.542 Admin Command Set Attributes 00:06:48.542 ============================ 00:06:48.542 Security Send/Receive: Not Supported 00:06:48.542 Format NVM: Supported 00:06:48.542 Firmware Activate/Download: Not Supported 00:06:48.542 Namespace Management: Supported 00:06:48.542 Device Self-Test: Not Supported 00:06:48.542 Directives: Supported 00:06:48.542 NVMe-MI: Not Supported 00:06:48.542 Virtualization Management: Not Supported 00:06:48.542 Doorbell Buffer Config: Supported 00:06:48.542 Get LBA Status Capability: Not Supported 00:06:48.542 Command & Feature Lockdown Capability: Not Supported 00:06:48.542 Abort Command Limit: 4 00:06:48.542 Async Event Request Limit: 4 00:06:48.542 Number of Firmware Slots: N/A 00:06:48.542 Firmware Slot 1 Read-Only: N/A 00:06:48.542 Firmware Activation Without Reset: N/A 00:06:48.542 Multiple Update Detection Support: N/A 00:06:48.542 Firmware Update Granularity: No Information Provided 00:06:48.542 Per-Namespace SMART Log: Yes 00:06:48.542 Asymmetric Namespace Access Log Page: Not Supported 00:06:48.542 Subsystem NQN: nqn.2019-08.org.qemu:12341 
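The nvme.sh xtrace lines interleaved with these dumps (nvme/nvme.sh@15 and @16) show each controller being identified in turn. A minimal sketch of that loop, assuming the bdfs array simply holds the four PCIe addresses seen in this run; the real script discovers and populates the array itself:

  # Assumed contents: the four controllers dumped in this log
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
    # -r selects the transport and address; -i sets the shared memory group ID
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r "trtype:PCIe traddr:${bdf}" -i 0
  done
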
00:06:48.542 Command Effects Log Page: Supported 00:06:48.542 Get Log Page Extended Data: Supported 00:06:48.542 Telemetry Log Pages: Not Supported 00:06:48.542 Persistent Event Log Pages: Not Supported 00:06:48.542 Supported Log Pages Log Page: May Support 00:06:48.542 Commands Supported & Effects Log Page: Not Supported 00:06:48.542 Feature Identifiers & Effects Log Page:May Support 00:06:48.542 NVMe-MI Commands & Effects Log Page: May Support 00:06:48.542 Data Area 4 for Telemetry Log: Not Supported 00:06:48.542 Error Log Page Entries Supported: 1 00:06:48.542 Keep Alive: Not Supported 00:06:48.542 00:06:48.542 NVM Command Set Attributes 00:06:48.542 ========================== 00:06:48.542 Submission Queue Entry Size 00:06:48.542 Max: 64 00:06:48.542 Min: 64 00:06:48.542 Completion Queue Entry Size 00:06:48.542 Max: 16 00:06:48.542 Min: 16 00:06:48.542 Number of Namespaces: 256 00:06:48.542 Compare Command: Supported 00:06:48.542 Write Uncorrectable Command: Not Supported 00:06:48.542 Dataset Management Command: Supported 00:06:48.542 Write Zeroes Command: Supported 00:06:48.542 Set Features Save Field: Supported 00:06:48.542 Reservations: Not Supported 00:06:48.542 Timestamp: Supported 00:06:48.542 Copy: Supported 00:06:48.542 Volatile Write Cache: Present 00:06:48.542 Atomic Write Unit (Normal): 1 00:06:48.542 Atomic Write Unit (PFail): 1 00:06:48.542 Atomic Compare & Write Unit: 1 00:06:48.542 Fused Compare & Write: Not Supported 00:06:48.542 Scatter-Gather List 00:06:48.542 SGL Command Set: Supported 00:06:48.542 SGL Keyed: Not Supported 00:06:48.542 SGL Bit Bucket Descriptor: Not Supported 00:06:48.542 SGL Metadata Pointer: Not Supported 00:06:48.542 Oversized SGL: Not Supported 00:06:48.542 SGL Metadata Address: Not Supported 00:06:48.542 SGL Offset: Not Supported 00:06:48.542 Transport SGL Data Block: Not Supported 00:06:48.542 Replay Protected Memory Block: Not Supported 00:06:48.542 00:06:48.542 Firmware Slot Information 00:06:48.542 ========================= 00:06:48.542 Active slot: 1 00:06:48.542 Slot 1 Firmware Revision: 1.0 00:06:48.542 00:06:48.542 00:06:48.542 Commands Supported and Effects 00:06:48.542 ============================== 00:06:48.542 Admin Commands 00:06:48.542 -------------- 00:06:48.542 Delete I/O Submission Queue (00h): Supported 00:06:48.542 Create I/O Submission Queue (01h): Supported 00:06:48.542 Get Log Page (02h): Supported 00:06:48.542 Delete I/O Completion Queue (04h): Supported 00:06:48.542 Create I/O Completion Queue (05h): Supported 00:06:48.542 Identify (06h): Supported 00:06:48.542 Abort (08h): Supported 00:06:48.542 Set Features (09h): Supported 00:06:48.542 Get Features (0Ah): Supported 00:06:48.542 Asynchronous Event Request (0Ch): Supported 00:06:48.542 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:48.542 Directive Send (19h): Supported 00:06:48.542 Directive Receive (1Ah): Supported 00:06:48.542 Virtualization Management (1Ch): Supported 00:06:48.542 Doorbell Buffer Config (7Ch): Supported 00:06:48.542 Format NVM (80h): Supported LBA-Change 00:06:48.542 I/O Commands 00:06:48.542 ------------ 00:06:48.542 Flush (00h): Supported LBA-Change 00:06:48.542 Write (01h): Supported LBA-Change 00:06:48.542 Read (02h): Supported 00:06:48.542 Compare (05h): Supported 00:06:48.542 Write Zeroes (08h): Supported LBA-Change 00:06:48.542 Dataset Management (09h): Supported LBA-Change 00:06:48.542 Unknown (0Ch): Supported 00:06:48.542 Unknown (12h): Supported 00:06:48.542 Copy (19h): Supported LBA-Change 00:06:48.542 Unknown (1Dh): 
Supported LBA-Change 00:06:48.542 00:06:48.542 Error Log 00:06:48.542 ========= 00:06:48.542 00:06:48.542 Arbitration 00:06:48.542 =========== 00:06:48.542 Arbitration Burst: no limit 00:06:48.542 00:06:48.542 Power Management 00:06:48.542 ================ 00:06:48.542 Number of Power States: 1 00:06:48.542 Current Power State: Power State #0 00:06:48.542 Power State #0: 00:06:48.542 Max Power: 25.00 W 00:06:48.542 Non-Operational State: Operational 00:06:48.542 Entry Latency: 16 microseconds 00:06:48.542 Exit Latency: 4 microseconds 00:06:48.542 Relative Read Throughput: 0 00:06:48.542 Relative Read Latency: 0 00:06:48.542 Relative Write Throughput: 0 00:06:48.542 Relative Write Latency: 0 00:06:48.542 Idle Power: Not Reported 00:06:48.542 Active Power: Not Reported 00:06:48.542 Non-Operational Permissive Mode: Not Supported 00:06:48.542 00:06:48.542 Health Information 00:06:48.542 ================== 00:06:48.542 Critical Warnings: 00:06:48.542 Available Spare Space: OK 00:06:48.542 Temperature: OK 00:06:48.542 Device Reliability: OK 00:06:48.542 Read Only: No 00:06:48.542 Volatile Memory Backup: OK 00:06:48.543 Current Temperature: 323 Kelvin (50 Celsius) 00:06:48.543 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:48.543 Available Spare: 0% 00:06:48.543 Available Spare Threshold: 0% 00:06:48.543 Life Percentage Used: 0% 00:06:48.543 Data Units Read: 1129 00:06:48.543 Data Units Written: 993 00:06:48.543 Host Read Commands: 56728 00:06:48.543 Host Write Commands: 55464 00:06:48.543 Controller Busy Time: 0 minutes 00:06:48.543 Power Cycles: 0 00:06:48.543 Power On Hours: 0 hours 00:06:48.543 Unsafe Shutdowns: 0 00:06:48.543 Unrecoverable Media Errors: 0 00:06:48.543 Lifetime Error Log Entries: 0 00:06:48.543 Warning Temperature Time: 0 minutes 00:06:48.543 Critical Temperature Time: 0 minutes 00:06:48.543 00:06:48.543 Number of Queues 00:06:48.543 ================ 00:06:48.543 Number of I/O Submission Queues: 64 00:06:48.543 Number of I/O Completion Queues: 64 00:06:48.543 00:06:48.543 ZNS Specific Controller Data 00:06:48.543 ============================ 00:06:48.543 Zone Append Size Limit: 0 00:06:48.543 00:06:48.543 00:06:48.543 Active Namespaces 00:06:48.543 ================= 00:06:48.543 Namespace ID:1 00:06:48.543 Error Recovery Timeout: Unlimited 00:06:48.543 Command Set Identifier: NVM (00h) 00:06:48.543 Deallocate: Supported 00:06:48.543 Deallocated/Unwritten Error: Supported 00:06:48.543 Deallocated Read Value: All 0x00 00:06:48.543 Deallocate in Write Zeroes: Not Supported 00:06:48.543 Deallocated Guard Field: 0xFFFF 00:06:48.543 Flush: Supported 00:06:48.543 Reservation: Not Supported 00:06:48.543 Namespace Sharing Capabilities: Private 00:06:48.543 Size (in LBAs): 1310720 (5GiB) 00:06:48.543 Capacity (in LBAs): 1310720 (5GiB) 00:06:48.543 Utilization (in LBAs): 1310720 (5GiB) 00:06:48.543 Thin Provisioning: Not Supported 00:06:48.543 Per-NS Atomic Units: No 00:06:48.543 Maximum Single Source Range Length: 128 00:06:48.543 Maximum Copy Length: 128 00:06:48.543 Maximum Source Range Count: 128 00:06:48.543 NGUID/EUI64 Never Reused: No 00:06:48.543 Namespace Write Protected: No 00:06:48.543 Number of LBA Formats: 8 00:06:48.543 Current LBA Format: LBA Format #04 00:06:48.543 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.543 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.543 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.543 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:48.543 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:06:48.543 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.543 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.543 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.543 00:06:48.543 NVM Specific Namespace Data 00:06:48.543 =========================== 00:06:48.543 Logical Block Storage Tag Mask: 0 00:06:48.543 Protection Information Capabilities: 00:06:48.543 16b Guard Protection Information Storage Tag Support: No 00:06:48.543 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.543 Storage Tag Check Read Support: No 00:06:48.543 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.543 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.543 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.543 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.543 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.543 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.543 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.543 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.543 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:06:48.543 01:26:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:06:48.801 ===================================================== 00:06:48.801 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:06:48.802 ===================================================== 00:06:48.802 Controller Capabilities/Features 00:06:48.802 ================================ 00:06:48.802 Vendor ID: 1b36 00:06:48.802 Subsystem Vendor ID: 1af4 00:06:48.802 Serial Number: 12342 00:06:48.802 Model Number: QEMU NVMe Ctrl 00:06:48.802 Firmware Version: 8.0.0 00:06:48.802 Recommended Arb Burst: 6 00:06:48.802 IEEE OUI Identifier: 00 54 52 00:06:48.802 Multi-path I/O 00:06:48.802 May have multiple subsystem ports: No 00:06:48.802 May have multiple controllers: No 00:06:48.802 Associated with SR-IOV VF: No 00:06:48.802 Max Data Transfer Size: 524288 00:06:48.802 Max Number of Namespaces: 256 00:06:48.802 Max Number of I/O Queues: 64 00:06:48.802 NVMe Specification Version (VS): 1.4 00:06:48.802 NVMe Specification Version (Identify): 1.4 00:06:48.802 Maximum Queue Entries: 2048 00:06:48.802 Contiguous Queues Required: Yes 00:06:48.802 Arbitration Mechanisms Supported 00:06:48.802 Weighted Round Robin: Not Supported 00:06:48.802 Vendor Specific: Not Supported 00:06:48.802 Reset Timeout: 7500 ms 00:06:48.802 Doorbell Stride: 4 bytes 00:06:48.802 NVM Subsystem Reset: Not Supported 00:06:48.802 Command Sets Supported 00:06:48.802 NVM Command Set: Supported 00:06:48.802 Boot Partition: Not Supported 00:06:48.802 Memory Page Size Minimum: 4096 bytes 00:06:48.802 Memory Page Size Maximum: 65536 bytes 00:06:48.802 Persistent Memory Region: Not Supported 00:06:48.802 Optional Asynchronous Events Supported 00:06:48.802 Namespace Attribute Notices: Supported 00:06:48.802 Firmware Activation Notices: Not Supported 00:06:48.802 ANA Change Notices: Not Supported 00:06:48.802 PLE Aggregate Log Change Notices: Not Supported 00:06:48.802 LBA Status Info Alert Notices: 
Not Supported 00:06:48.802 EGE Aggregate Log Change Notices: Not Supported 00:06:48.802 Normal NVM Subsystem Shutdown event: Not Supported 00:06:48.802 Zone Descriptor Change Notices: Not Supported 00:06:48.802 Discovery Log Change Notices: Not Supported 00:06:48.802 Controller Attributes 00:06:48.802 128-bit Host Identifier: Not Supported 00:06:48.802 Non-Operational Permissive Mode: Not Supported 00:06:48.802 NVM Sets: Not Supported 00:06:48.802 Read Recovery Levels: Not Supported 00:06:48.802 Endurance Groups: Not Supported 00:06:48.802 Predictable Latency Mode: Not Supported 00:06:48.802 Traffic Based Keep ALive: Not Supported 00:06:48.802 Namespace Granularity: Not Supported 00:06:48.802 SQ Associations: Not Supported 00:06:48.802 UUID List: Not Supported 00:06:48.802 Multi-Domain Subsystem: Not Supported 00:06:48.802 Fixed Capacity Management: Not Supported 00:06:48.802 Variable Capacity Management: Not Supported 00:06:48.802 Delete Endurance Group: Not Supported 00:06:48.802 Delete NVM Set: Not Supported 00:06:48.802 Extended LBA Formats Supported: Supported 00:06:48.802 Flexible Data Placement Supported: Not Supported 00:06:48.802 00:06:48.802 Controller Memory Buffer Support 00:06:48.802 ================================ 00:06:48.802 Supported: No 00:06:48.802 00:06:48.802 Persistent Memory Region Support 00:06:48.802 ================================ 00:06:48.802 Supported: No 00:06:48.802 00:06:48.802 Admin Command Set Attributes 00:06:48.802 ============================ 00:06:48.802 Security Send/Receive: Not Supported 00:06:48.802 Format NVM: Supported 00:06:48.802 Firmware Activate/Download: Not Supported 00:06:48.802 Namespace Management: Supported 00:06:48.802 Device Self-Test: Not Supported 00:06:48.802 Directives: Supported 00:06:48.802 NVMe-MI: Not Supported 00:06:48.802 Virtualization Management: Not Supported 00:06:48.802 Doorbell Buffer Config: Supported 00:06:48.802 Get LBA Status Capability: Not Supported 00:06:48.802 Command & Feature Lockdown Capability: Not Supported 00:06:48.802 Abort Command Limit: 4 00:06:48.802 Async Event Request Limit: 4 00:06:48.802 Number of Firmware Slots: N/A 00:06:48.802 Firmware Slot 1 Read-Only: N/A 00:06:48.802 Firmware Activation Without Reset: N/A 00:06:48.802 Multiple Update Detection Support: N/A 00:06:48.802 Firmware Update Granularity: No Information Provided 00:06:48.802 Per-Namespace SMART Log: Yes 00:06:48.802 Asymmetric Namespace Access Log Page: Not Supported 00:06:48.802 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:06:48.802 Command Effects Log Page: Supported 00:06:48.802 Get Log Page Extended Data: Supported 00:06:48.802 Telemetry Log Pages: Not Supported 00:06:48.802 Persistent Event Log Pages: Not Supported 00:06:48.802 Supported Log Pages Log Page: May Support 00:06:48.802 Commands Supported & Effects Log Page: Not Supported 00:06:48.802 Feature Identifiers & Effects Log Page:May Support 00:06:48.802 NVMe-MI Commands & Effects Log Page: May Support 00:06:48.802 Data Area 4 for Telemetry Log: Not Supported 00:06:48.802 Error Log Page Entries Supported: 1 00:06:48.802 Keep Alive: Not Supported 00:06:48.802 00:06:48.802 NVM Command Set Attributes 00:06:48.802 ========================== 00:06:48.802 Submission Queue Entry Size 00:06:48.802 Max: 64 00:06:48.802 Min: 64 00:06:48.802 Completion Queue Entry Size 00:06:48.802 Max: 16 00:06:48.802 Min: 16 00:06:48.802 Number of Namespaces: 256 00:06:48.802 Compare Command: Supported 00:06:48.802 Write Uncorrectable Command: Not Supported 00:06:48.802 Dataset Management Command: 
Supported 00:06:48.802 Write Zeroes Command: Supported 00:06:48.802 Set Features Save Field: Supported 00:06:48.802 Reservations: Not Supported 00:06:48.802 Timestamp: Supported 00:06:48.802 Copy: Supported 00:06:48.802 Volatile Write Cache: Present 00:06:48.802 Atomic Write Unit (Normal): 1 00:06:48.802 Atomic Write Unit (PFail): 1 00:06:48.802 Atomic Compare & Write Unit: 1 00:06:48.802 Fused Compare & Write: Not Supported 00:06:48.802 Scatter-Gather List 00:06:48.802 SGL Command Set: Supported 00:06:48.802 SGL Keyed: Not Supported 00:06:48.802 SGL Bit Bucket Descriptor: Not Supported 00:06:48.802 SGL Metadata Pointer: Not Supported 00:06:48.802 Oversized SGL: Not Supported 00:06:48.802 SGL Metadata Address: Not Supported 00:06:48.802 SGL Offset: Not Supported 00:06:48.802 Transport SGL Data Block: Not Supported 00:06:48.802 Replay Protected Memory Block: Not Supported 00:06:48.802 00:06:48.802 Firmware Slot Information 00:06:48.802 ========================= 00:06:48.802 Active slot: 1 00:06:48.802 Slot 1 Firmware Revision: 1.0 00:06:48.802 00:06:48.802 00:06:48.802 Commands Supported and Effects 00:06:48.802 ============================== 00:06:48.802 Admin Commands 00:06:48.802 -------------- 00:06:48.802 Delete I/O Submission Queue (00h): Supported 00:06:48.802 Create I/O Submission Queue (01h): Supported 00:06:48.802 Get Log Page (02h): Supported 00:06:48.802 Delete I/O Completion Queue (04h): Supported 00:06:48.802 Create I/O Completion Queue (05h): Supported 00:06:48.802 Identify (06h): Supported 00:06:48.802 Abort (08h): Supported 00:06:48.802 Set Features (09h): Supported 00:06:48.802 Get Features (0Ah): Supported 00:06:48.802 Asynchronous Event Request (0Ch): Supported 00:06:48.802 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:48.802 Directive Send (19h): Supported 00:06:48.802 Directive Receive (1Ah): Supported 00:06:48.802 Virtualization Management (1Ch): Supported 00:06:48.802 Doorbell Buffer Config (7Ch): Supported 00:06:48.802 Format NVM (80h): Supported LBA-Change 00:06:48.802 I/O Commands 00:06:48.802 ------------ 00:06:48.802 Flush (00h): Supported LBA-Change 00:06:48.802 Write (01h): Supported LBA-Change 00:06:48.802 Read (02h): Supported 00:06:48.802 Compare (05h): Supported 00:06:48.802 Write Zeroes (08h): Supported LBA-Change 00:06:48.802 Dataset Management (09h): Supported LBA-Change 00:06:48.802 Unknown (0Ch): Supported 00:06:48.802 Unknown (12h): Supported 00:06:48.802 Copy (19h): Supported LBA-Change 00:06:48.802 Unknown (1Dh): Supported LBA-Change 00:06:48.802 00:06:48.802 Error Log 00:06:48.802 ========= 00:06:48.802 00:06:48.802 Arbitration 00:06:48.802 =========== 00:06:48.802 Arbitration Burst: no limit 00:06:48.802 00:06:48.802 Power Management 00:06:48.802 ================ 00:06:48.802 Number of Power States: 1 00:06:48.802 Current Power State: Power State #0 00:06:48.802 Power State #0: 00:06:48.802 Max Power: 25.00 W 00:06:48.802 Non-Operational State: Operational 00:06:48.802 Entry Latency: 16 microseconds 00:06:48.802 Exit Latency: 4 microseconds 00:06:48.802 Relative Read Throughput: 0 00:06:48.802 Relative Read Latency: 0 00:06:48.802 Relative Write Throughput: 0 00:06:48.803 Relative Write Latency: 0 00:06:48.803 Idle Power: Not Reported 00:06:48.803 Active Power: Not Reported 00:06:48.803 Non-Operational Permissive Mode: Not Supported 00:06:48.803 00:06:48.803 Health Information 00:06:48.803 ================== 00:06:48.803 Critical Warnings: 00:06:48.803 Available Spare Space: OK 00:06:48.803 Temperature: OK 00:06:48.803 Device 
Reliability: OK 00:06:48.803 Read Only: No 00:06:48.803 Volatile Memory Backup: OK 00:06:48.803 Current Temperature: 323 Kelvin (50 Celsius) 00:06:48.803 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:48.803 Available Spare: 0% 00:06:48.803 Available Spare Threshold: 0% 00:06:48.803 Life Percentage Used: 0% 00:06:48.803 Data Units Read: 2194 00:06:48.803 Data Units Written: 1981 00:06:48.803 Host Read Commands: 114098 00:06:48.803 Host Write Commands: 112367 00:06:48.803 Controller Busy Time: 0 minutes 00:06:48.803 Power Cycles: 0 00:06:48.803 Power On Hours: 0 hours 00:06:48.803 Unsafe Shutdowns: 0 00:06:48.803 Unrecoverable Media Errors: 0 00:06:48.803 Lifetime Error Log Entries: 0 00:06:48.803 Warning Temperature Time: 0 minutes 00:06:48.803 Critical Temperature Time: 0 minutes 00:06:48.803 00:06:48.803 Number of Queues 00:06:48.803 ================ 00:06:48.803 Number of I/O Submission Queues: 64 00:06:48.803 Number of I/O Completion Queues: 64 00:06:48.803 00:06:48.803 ZNS Specific Controller Data 00:06:48.803 ============================ 00:06:48.803 Zone Append Size Limit: 0 00:06:48.803 00:06:48.803 00:06:48.803 Active Namespaces 00:06:48.803 ================= 00:06:48.803 Namespace ID:1 00:06:48.803 Error Recovery Timeout: Unlimited 00:06:48.803 Command Set Identifier: NVM (00h) 00:06:48.803 Deallocate: Supported 00:06:48.803 Deallocated/Unwritten Error: Supported 00:06:48.803 Deallocated Read Value: All 0x00 00:06:48.803 Deallocate in Write Zeroes: Not Supported 00:06:48.803 Deallocated Guard Field: 0xFFFF 00:06:48.803 Flush: Supported 00:06:48.803 Reservation: Not Supported 00:06:48.803 Namespace Sharing Capabilities: Private 00:06:48.803 Size (in LBAs): 1048576 (4GiB) 00:06:48.803 Capacity (in LBAs): 1048576 (4GiB) 00:06:48.803 Utilization (in LBAs): 1048576 (4GiB) 00:06:48.803 Thin Provisioning: Not Supported 00:06:48.803 Per-NS Atomic Units: No 00:06:48.803 Maximum Single Source Range Length: 128 00:06:48.803 Maximum Copy Length: 128 00:06:48.803 Maximum Source Range Count: 128 00:06:48.803 NGUID/EUI64 Never Reused: No 00:06:48.803 Namespace Write Protected: No 00:06:48.803 Number of LBA Formats: 8 00:06:48.803 Current LBA Format: LBA Format #04 00:06:48.803 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.803 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.803 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.803 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:48.803 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:48.803 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.803 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.803 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.803 00:06:48.803 NVM Specific Namespace Data 00:06:48.803 =========================== 00:06:48.803 Logical Block Storage Tag Mask: 0 00:06:48.803 Protection Information Capabilities: 00:06:48.803 16b Guard Protection Information Storage Tag Support: No 00:06:48.803 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.803 Storage Tag Check Read Support: No 00:06:48.803 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Namespace ID:2 00:06:48.803 Error Recovery Timeout: Unlimited 00:06:48.803 Command Set Identifier: NVM (00h) 00:06:48.803 Deallocate: Supported 00:06:48.803 Deallocated/Unwritten Error: Supported 00:06:48.803 Deallocated Read Value: All 0x00 00:06:48.803 Deallocate in Write Zeroes: Not Supported 00:06:48.803 Deallocated Guard Field: 0xFFFF 00:06:48.803 Flush: Supported 00:06:48.803 Reservation: Not Supported 00:06:48.803 Namespace Sharing Capabilities: Private 00:06:48.803 Size (in LBAs): 1048576 (4GiB) 00:06:48.803 Capacity (in LBAs): 1048576 (4GiB) 00:06:48.803 Utilization (in LBAs): 1048576 (4GiB) 00:06:48.803 Thin Provisioning: Not Supported 00:06:48.803 Per-NS Atomic Units: No 00:06:48.803 Maximum Single Source Range Length: 128 00:06:48.803 Maximum Copy Length: 128 00:06:48.803 Maximum Source Range Count: 128 00:06:48.803 NGUID/EUI64 Never Reused: No 00:06:48.803 Namespace Write Protected: No 00:06:48.803 Number of LBA Formats: 8 00:06:48.803 Current LBA Format: LBA Format #04 00:06:48.803 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.803 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.803 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.803 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:48.803 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:48.803 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.803 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.803 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.803 00:06:48.803 NVM Specific Namespace Data 00:06:48.803 =========================== 00:06:48.803 Logical Block Storage Tag Mask: 0 00:06:48.803 Protection Information Capabilities: 00:06:48.803 16b Guard Protection Information Storage Tag Support: No 00:06:48.803 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.803 Storage Tag Check Read Support: No 00:06:48.803 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Namespace ID:3 00:06:48.803 Error Recovery Timeout: Unlimited 00:06:48.803 Command Set Identifier: NVM (00h) 00:06:48.803 Deallocate: Supported 00:06:48.803 Deallocated/Unwritten Error: Supported 00:06:48.803 Deallocated Read Value: All 0x00 00:06:48.803 Deallocate in Write Zeroes: Not Supported 00:06:48.803 Deallocated Guard Field: 0xFFFF 00:06:48.803 Flush: Supported 00:06:48.803 Reservation: Not Supported 00:06:48.803 
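The namespace size fields that follow read "Size (in LBAs): 1048576 (4GiB)" under the current LBA format #04, whose data size is 4096 bytes per the format table. A quick arithmetic check of that GiB figure, assuming the 4096-byte block size shown there:

  # 1048576 LBAs x 4096 bytes per LBA = 4294967296 bytes = 4 GiB
  echo "$(( 1048576 * 4096 / (1024 * 1024 * 1024) )) GiB"   # prints: 4 GiB
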
Namespace Sharing Capabilities: Private 00:06:48.803 Size (in LBAs): 1048576 (4GiB) 00:06:48.803 Capacity (in LBAs): 1048576 (4GiB) 00:06:48.803 Utilization (in LBAs): 1048576 (4GiB) 00:06:48.803 Thin Provisioning: Not Supported 00:06:48.803 Per-NS Atomic Units: No 00:06:48.803 Maximum Single Source Range Length: 128 00:06:48.803 Maximum Copy Length: 128 00:06:48.803 Maximum Source Range Count: 128 00:06:48.803 NGUID/EUI64 Never Reused: No 00:06:48.803 Namespace Write Protected: No 00:06:48.803 Number of LBA Formats: 8 00:06:48.803 Current LBA Format: LBA Format #04 00:06:48.803 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:48.803 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:48.803 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:48.803 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:48.803 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:48.803 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:48.803 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:48.803 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:48.803 00:06:48.803 NVM Specific Namespace Data 00:06:48.803 =========================== 00:06:48.803 Logical Block Storage Tag Mask: 0 00:06:48.803 Protection Information Capabilities: 00:06:48.803 16b Guard Protection Information Storage Tag Support: No 00:06:48.803 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:48.803 Storage Tag Check Read Support: No 00:06:48.803 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.803 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.804 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.804 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:48.804 01:26:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:06:48.804 01:26:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:06:49.063 ===================================================== 00:06:49.063 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:49.063 ===================================================== 00:06:49.063 Controller Capabilities/Features 00:06:49.063 ================================ 00:06:49.063 Vendor ID: 1b36 00:06:49.063 Subsystem Vendor ID: 1af4 00:06:49.063 Serial Number: 12343 00:06:49.063 Model Number: QEMU NVMe Ctrl 00:06:49.063 Firmware Version: 8.0.0 00:06:49.063 Recommended Arb Burst: 6 00:06:49.063 IEEE OUI Identifier: 00 54 52 00:06:49.063 Multi-path I/O 00:06:49.063 May have multiple subsystem ports: No 00:06:49.063 May have multiple controllers: Yes 00:06:49.063 Associated with SR-IOV VF: No 00:06:49.063 Max Data Transfer Size: 524288 00:06:49.063 Max Number of Namespaces: 256 00:06:49.063 Max Number of I/O Queues: 64 00:06:49.063 NVMe Specification Version (VS): 1.4 00:06:49.063 NVMe Specification Version (Identify): 1.4 00:06:49.063 Maximum Queue Entries: 2048 
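Serial 12343 (subsystem nqn.2019-08.org.qemu:fdp-subsys3), whose dump continues below, is the one controller in this run reporting "Flexible Data Placement Supported: Supported" along with "Endurance Groups: Supported". A hedged one-liner to pull the FDP-capable controller banner out of a saved identify capture; identify.log is an assumed capture file, not something this job writes:

  # Remember the most recent controller banner; print it when the FDP line appears
  awk '/NVMe Controller at/ { ctrl = $0 }
       /Flexible Data Placement Supported: Supported/ { print ctrl }' identify.log
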
00:06:49.063 Contiguous Queues Required: Yes 00:06:49.063 Arbitration Mechanisms Supported 00:06:49.063 Weighted Round Robin: Not Supported 00:06:49.063 Vendor Specific: Not Supported 00:06:49.063 Reset Timeout: 7500 ms 00:06:49.063 Doorbell Stride: 4 bytes 00:06:49.063 NVM Subsystem Reset: Not Supported 00:06:49.063 Command Sets Supported 00:06:49.063 NVM Command Set: Supported 00:06:49.063 Boot Partition: Not Supported 00:06:49.063 Memory Page Size Minimum: 4096 bytes 00:06:49.063 Memory Page Size Maximum: 65536 bytes 00:06:49.063 Persistent Memory Region: Not Supported 00:06:49.063 Optional Asynchronous Events Supported 00:06:49.063 Namespace Attribute Notices: Supported 00:06:49.063 Firmware Activation Notices: Not Supported 00:06:49.063 ANA Change Notices: Not Supported 00:06:49.063 PLE Aggregate Log Change Notices: Not Supported 00:06:49.063 LBA Status Info Alert Notices: Not Supported 00:06:49.063 EGE Aggregate Log Change Notices: Not Supported 00:06:49.063 Normal NVM Subsystem Shutdown event: Not Supported 00:06:49.063 Zone Descriptor Change Notices: Not Supported 00:06:49.063 Discovery Log Change Notices: Not Supported 00:06:49.063 Controller Attributes 00:06:49.063 128-bit Host Identifier: Not Supported 00:06:49.063 Non-Operational Permissive Mode: Not Supported 00:06:49.063 NVM Sets: Not Supported 00:06:49.063 Read Recovery Levels: Not Supported 00:06:49.063 Endurance Groups: Supported 00:06:49.063 Predictable Latency Mode: Not Supported 00:06:49.063 Traffic Based Keep Alive: Not Supported 00:06:49.063 Namespace Granularity: Not Supported 00:06:49.063 SQ Associations: Not Supported 00:06:49.063 UUID List: Not Supported 00:06:49.063 Multi-Domain Subsystem: Not Supported 00:06:49.063 Fixed Capacity Management: Not Supported 00:06:49.063 Variable Capacity Management: Not Supported 00:06:49.063 Delete Endurance Group: Not Supported 00:06:49.063 Delete NVM Set: Not Supported 00:06:49.063 Extended LBA Formats Supported: Supported 00:06:49.063 Flexible Data Placement Supported: Supported 00:06:49.063 00:06:49.063 Controller Memory Buffer Support 00:06:49.063 ================================ 00:06:49.063 Supported: No 00:06:49.063 00:06:49.063 Persistent Memory Region Support 00:06:49.063 ================================ 00:06:49.063 Supported: No 00:06:49.063 00:06:49.063 Admin Command Set Attributes 00:06:49.063 ============================ 00:06:49.063 Security Send/Receive: Not Supported 00:06:49.063 Format NVM: Supported 00:06:49.063 Firmware Activate/Download: Not Supported 00:06:49.063 Namespace Management: Supported 00:06:49.063 Device Self-Test: Not Supported 00:06:49.063 Directives: Supported 00:06:49.063 NVMe-MI: Not Supported 00:06:49.063 Virtualization Management: Not Supported 00:06:49.063 Doorbell Buffer Config: Supported 00:06:49.063 Get LBA Status Capability: Not Supported 00:06:49.064 Command & Feature Lockdown Capability: Not Supported 00:06:49.064 Abort Command Limit: 4 00:06:49.064 Async Event Request Limit: 4 00:06:49.064 Number of Firmware Slots: N/A 00:06:49.064 Firmware Slot 1 Read-Only: N/A 00:06:49.064 Firmware Activation Without Reset: N/A 00:06:49.064 Multiple Update Detection Support: N/A 00:06:49.064 Firmware Update Granularity: No Information Provided 00:06:49.064 Per-Namespace SMART Log: Yes 00:06:49.064 Asymmetric Namespace Access Log Page: Not Supported 00:06:49.064 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:06:49.064 Command Effects Log Page: Supported 00:06:49.064 Get Log Page Extended Data: Supported 00:06:49.064 Telemetry Log Pages: Not
Supported 00:06:49.064 Persistent Event Log Pages: Not Supported 00:06:49.064 Supported Log Pages Log Page: May Support 00:06:49.064 Commands Supported & Effects Log Page: Not Supported 00:06:49.064 Feature Identifiers & Effects Log Page: May Support 00:06:49.064 NVMe-MI Commands & Effects Log Page: May Support 00:06:49.064 Data Area 4 for Telemetry Log: Not Supported 00:06:49.064 Error Log Page Entries Supported: 1 00:06:49.064 Keep Alive: Not Supported 00:06:49.064 00:06:49.064 NVM Command Set Attributes 00:06:49.064 ========================== 00:06:49.064 Submission Queue Entry Size 00:06:49.064 Max: 64 00:06:49.064 Min: 64 00:06:49.064 Completion Queue Entry Size 00:06:49.064 Max: 16 00:06:49.064 Min: 16 00:06:49.064 Number of Namespaces: 256 00:06:49.064 Compare Command: Supported 00:06:49.064 Write Uncorrectable Command: Not Supported 00:06:49.064 Dataset Management Command: Supported 00:06:49.064 Write Zeroes Command: Supported 00:06:49.064 Set Features Save Field: Supported 00:06:49.064 Reservations: Not Supported 00:06:49.064 Timestamp: Supported 00:06:49.064 Copy: Supported 00:06:49.064 Volatile Write Cache: Present 00:06:49.064 Atomic Write Unit (Normal): 1 00:06:49.064 Atomic Write Unit (PFail): 1 00:06:49.064 Atomic Compare & Write Unit: 1 00:06:49.064 Fused Compare & Write: Not Supported 00:06:49.064 Scatter-Gather List 00:06:49.064 SGL Command Set: Supported 00:06:49.064 SGL Keyed: Not Supported 00:06:49.064 SGL Bit Bucket Descriptor: Not Supported 00:06:49.064 SGL Metadata Pointer: Not Supported 00:06:49.064 Oversized SGL: Not Supported 00:06:49.064 SGL Metadata Address: Not Supported 00:06:49.064 SGL Offset: Not Supported 00:06:49.064 Transport SGL Data Block: Not Supported 00:06:49.064 Replay Protected Memory Block: Not Supported 00:06:49.064 00:06:49.064 Firmware Slot Information 00:06:49.064 ========================= 00:06:49.064 Active slot: 1 00:06:49.064 Slot 1 Firmware Revision: 1.0 00:06:49.064 00:06:49.064 00:06:49.064 Commands Supported and Effects 00:06:49.064 ============================== 00:06:49.064 Admin Commands 00:06:49.064 -------------- 00:06:49.064 Delete I/O Submission Queue (00h): Supported 00:06:49.064 Create I/O Submission Queue (01h): Supported 00:06:49.064 Get Log Page (02h): Supported 00:06:49.064 Delete I/O Completion Queue (04h): Supported 00:06:49.064 Create I/O Completion Queue (05h): Supported 00:06:49.064 Identify (06h): Supported 00:06:49.064 Abort (08h): Supported 00:06:49.064 Set Features (09h): Supported 00:06:49.064 Get Features (0Ah): Supported 00:06:49.064 Asynchronous Event Request (0Ch): Supported 00:06:49.064 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:49.064 Directive Send (19h): Supported 00:06:49.064 Directive Receive (1Ah): Supported 00:06:49.064 Virtualization Management (1Ch): Supported 00:06:49.064 Doorbell Buffer Config (7Ch): Supported 00:06:49.064 Format NVM (80h): Supported LBA-Change 00:06:49.064 I/O Commands 00:06:49.064 ------------ 00:06:49.064 Flush (00h): Supported LBA-Change 00:06:49.064 Write (01h): Supported LBA-Change 00:06:49.064 Read (02h): Supported 00:06:49.064 Compare (05h): Supported 00:06:49.064 Write Zeroes (08h): Supported LBA-Change 00:06:49.064 Dataset Management (09h): Supported LBA-Change 00:06:49.064 Unknown (0Ch): Supported 00:06:49.064 Unknown (12h): Supported 00:06:49.064 Copy (19h): Supported LBA-Change 00:06:49.064 Unknown (1Dh): Supported LBA-Change 00:06:49.064 00:06:49.064 Error Log 00:06:49.064 ========= 00:06:49.064 00:06:49.064 Arbitration 00:06:49.064 ===========
00:06:49.064 Arbitration Burst: no limit 00:06:49.064 00:06:49.064 Power Management 00:06:49.064 ================ 00:06:49.064 Number of Power States: 1 00:06:49.064 Current Power State: Power State #0 00:06:49.064 Power State #0: 00:06:49.064 Max Power: 25.00 W 00:06:49.064 Non-Operational State: Operational 00:06:49.064 Entry Latency: 16 microseconds 00:06:49.064 Exit Latency: 4 microseconds 00:06:49.064 Relative Read Throughput: 0 00:06:49.064 Relative Read Latency: 0 00:06:49.064 Relative Write Throughput: 0 00:06:49.064 Relative Write Latency: 0 00:06:49.064 Idle Power: Not Reported 00:06:49.064 Active Power: Not Reported 00:06:49.064 Non-Operational Permissive Mode: Not Supported 00:06:49.064 00:06:49.064 Health Information 00:06:49.064 ================== 00:06:49.064 Critical Warnings: 00:06:49.064 Available Spare Space: OK 00:06:49.064 Temperature: OK 00:06:49.064 Device Reliability: OK 00:06:49.064 Read Only: No 00:06:49.064 Volatile Memory Backup: OK 00:06:49.064 Current Temperature: 323 Kelvin (50 Celsius) 00:06:49.064 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:49.064 Available Spare: 0% 00:06:49.064 Available Spare Threshold: 0% 00:06:49.064 Life Percentage Used: 0% 00:06:49.064 Data Units Read: 874 00:06:49.064 Data Units Written: 803 00:06:49.064 Host Read Commands: 39244 00:06:49.064 Host Write Commands: 38667 00:06:49.064 Controller Busy Time: 0 minutes 00:06:49.064 Power Cycles: 0 00:06:49.064 Power On Hours: 0 hours 00:06:49.064 Unsafe Shutdowns: 0 00:06:49.064 Unrecoverable Media Errors: 0 00:06:49.064 Lifetime Error Log Entries: 0 00:06:49.064 Warning Temperature Time: 0 minutes 00:06:49.064 Critical Temperature Time: 0 minutes 00:06:49.064 00:06:49.064 Number of Queues 00:06:49.064 ================ 00:06:49.064 Number of I/O Submission Queues: 64 00:06:49.064 Number of I/O Completion Queues: 64 00:06:49.064 00:06:49.064 ZNS Specific Controller Data 00:06:49.064 ============================ 00:06:49.064 Zone Append Size Limit: 0 00:06:49.064 00:06:49.064 00:06:49.064 Active Namespaces 00:06:49.064 ================= 00:06:49.064 Namespace ID:1 00:06:49.064 Error Recovery Timeout: Unlimited 00:06:49.064 Command Set Identifier: NVM (00h) 00:06:49.064 Deallocate: Supported 00:06:49.064 Deallocated/Unwritten Error: Supported 00:06:49.064 Deallocated Read Value: All 0x00 00:06:49.064 Deallocate in Write Zeroes: Not Supported 00:06:49.064 Deallocated Guard Field: 0xFFFF 00:06:49.064 Flush: Supported 00:06:49.064 Reservation: Not Supported 00:06:49.064 Namespace Sharing Capabilities: Multiple Controllers 00:06:49.064 Size (in LBAs): 262144 (1GiB) 00:06:49.064 Capacity (in LBAs): 262144 (1GiB) 00:06:49.064 Utilization (in LBAs): 262144 (1GiB) 00:06:49.064 Thin Provisioning: Not Supported 00:06:49.064 Per-NS Atomic Units: No 00:06:49.064 Maximum Single Source Range Length: 128 00:06:49.064 Maximum Copy Length: 128 00:06:49.064 Maximum Source Range Count: 128 00:06:49.064 NGUID/EUI64 Never Reused: No 00:06:49.064 Namespace Write Protected: No 00:06:49.064 Endurance group ID: 1 00:06:49.064 Number of LBA Formats: 8 00:06:49.064 Current LBA Format: LBA Format #04 00:06:49.064 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:49.064 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:49.064 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:49.064 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:49.064 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:49.064 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:49.064 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:06:49.064 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:49.064 00:06:49.064 Get Feature FDP: 00:06:49.064 ================ 00:06:49.064 Enabled: Yes 00:06:49.064 FDP configuration index: 0 00:06:49.064 00:06:49.064 FDP configurations log page 00:06:49.064 =========================== 00:06:49.064 Number of FDP configurations: 1 00:06:49.064 Version: 0 00:06:49.064 Size: 112 00:06:49.064 FDP Configuration Descriptor: 0 00:06:49.064 Descriptor Size: 96 00:06:49.064 Reclaim Group Identifier format: 2 00:06:49.064 FDP Volatile Write Cache: Not Present 00:06:49.064 FDP Configuration: Valid 00:06:49.064 Vendor Specific Size: 0 00:06:49.064 Number of Reclaim Groups: 2 00:06:49.064 Number of Reclaim Unit Handles: 8 00:06:49.065 Max Placement Identifiers: 128 00:06:49.065 Number of Namespaces Supported: 256 00:06:49.065 Reclaim unit Nominal Size: 6000000 bytes 00:06:49.065 Estimated Reclaim Unit Time Limit: Not Reported 00:06:49.065 RUH Desc #000: RUH Type: Initially Isolated 00:06:49.065 RUH Desc #001: RUH Type: Initially Isolated 00:06:49.065 RUH Desc #002: RUH Type: Initially Isolated 00:06:49.065 RUH Desc #003: RUH Type: Initially Isolated 00:06:49.065 RUH Desc #004: RUH Type: Initially Isolated 00:06:49.065 RUH Desc #005: RUH Type: Initially Isolated 00:06:49.065 RUH Desc #006: RUH Type: Initially Isolated 00:06:49.065 RUH Desc #007: RUH Type: Initially Isolated 00:06:49.065 00:06:49.065 FDP reclaim unit handle usage log page 00:06:49.065 ====================================== 00:06:49.065 Number of Reclaim Unit Handles: 8 00:06:49.065 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:06:49.065 RUH Usage Desc #001: RUH Attributes: Unused 00:06:49.065 RUH Usage Desc #002: RUH Attributes: Unused 00:06:49.065 RUH Usage Desc #003: RUH Attributes: Unused 00:06:49.065 RUH Usage Desc #004: RUH Attributes: Unused 00:06:49.065 RUH Usage Desc #005: RUH Attributes: Unused 00:06:49.065 RUH Usage Desc #006: RUH Attributes: Unused 00:06:49.065 RUH Usage Desc #007: RUH Attributes: Unused 00:06:49.065 00:06:49.065 FDP statistics log page 00:06:49.065 ======================= 00:06:49.065 Host bytes with metadata written: 520658944 00:06:49.065 Media bytes with metadata written: 520716288 00:06:49.065 Media bytes erased: 0 00:06:49.065 00:06:49.065 FDP events log page 00:06:49.065 =================== 00:06:49.065 Number of FDP events: 0 00:06:49.065 00:06:49.065 NVM Specific Namespace Data 00:06:49.065 =========================== 00:06:49.065 Logical Block Storage Tag Mask: 0 00:06:49.065 Protection Information Capabilities: 00:06:49.065 16b Guard Protection Information Storage Tag Support: No 00:06:49.065 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:49.065 Storage Tag Check Read Support: No 00:06:49.065 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:49.065 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:49.065 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:49.065 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:49.065 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:49.065 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:49.065 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:49.065 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:49.065 00:06:49.065 real 0m1.243s 00:06:49.065 user 0m0.438s 00:06:49.065 sys 0m0.581s 00:06:49.065 01:26:57 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.065 ************************************ 00:06:49.065 END TEST nvme_identify 00:06:49.065 ************************************ 00:06:49.065 01:26:57 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:06:49.065 01:26:57 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:06:49.065 01:26:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.065 01:26:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.065 01:26:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.065 ************************************ 00:06:49.065 START TEST nvme_perf 00:06:49.065 ************************************ 00:06:49.065 01:26:57 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:06:49.065 01:26:57 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:06:50.440 Initializing NVMe Controllers 00:06:50.440 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:50.440 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:50.440 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:50.440 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:06:50.440 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:06:50.440 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:06:50.440 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:06:50.440 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:06:50.440 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:06:50.440 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:06:50.440 Initialization complete. Launching workers. 
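(For reference: the two SPDK tools exercised above can be replayed by hand. The following is a minimal bash sketch, not the test script itself; it assumes the build tree from this run at /home/vagrant/spdk_repo/spdk, and the bdfs list is an assumption mirroring the four QEMU [1b36:0010] controllers attached above. All flags are copied verbatim from the spdk_nvme_identify and spdk_nvme_perf invocations captured in this log.)

#!/usr/bin/env bash
# Minimal sketch: replay the identify + perf steps from this test run.
# Assumptions: SPDK built at the path used by this CI job; the bdfs
# array below is hypothetical and mirrors the controllers listed above.
set -euo pipefail
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)

# Dump controller and namespace data for each device, one bdf at a time,
# as the nvme.sh loop above does.
for bdf in "${bdfs[@]}"; do
    "${SPDK_BIN}/spdk_nvme_identify" -r "trtype:PCIe traddr:${bdf}" -i 0
done

# Read-latency pass: queue depth 128, 12288-byte reads, 1 second, with
# detailed latency tracking (-LL); flags match the perf line above.
"${SPDK_BIN}/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N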
00:06:50.440 ======================================================== 00:06:50.440 Latency(us) 00:06:50.440 Device Information : IOPS MiB/s Average min max 00:06:50.440 PCIE (0000:00:10.0) NSID 1 from core 0: 19012.13 222.80 6740.92 5575.69 32451.13 00:06:50.440 PCIE (0000:00:11.0) NSID 1 from core 0: 19012.13 222.80 6731.94 5646.54 30703.57 00:06:50.440 PCIE (0000:00:13.0) NSID 1 from core 0: 19012.13 222.80 6721.74 5702.97 29341.52 00:06:50.440 PCIE (0000:00:12.0) NSID 1 from core 0: 19012.13 222.80 6711.32 5700.37 27533.36 00:06:50.440 PCIE (0000:00:12.0) NSID 2 from core 0: 19012.13 222.80 6701.04 5702.51 25778.05 00:06:50.440 PCIE (0000:00:12.0) NSID 3 from core 0: 19075.93 223.55 6668.46 5679.94 20724.95 00:06:50.440 ======================================================== 00:06:50.440 Total : 114136.58 1337.54 6712.55 5575.69 32451.13 00:06:50.440 00:06:50.440 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:06:50.440 ================================================================================= 00:06:50.440 1.00000% : 5747.003us 00:06:50.440 10.00000% : 5898.240us 00:06:50.440 25.00000% : 6074.683us 00:06:50.440 50.00000% : 6377.157us 00:06:50.440 75.00000% : 6654.425us 00:06:50.440 90.00000% : 7208.960us 00:06:50.440 95.00000% : 9376.689us 00:06:50.440 98.00000% : 11241.945us 00:06:50.440 99.00000% : 12048.542us 00:06:50.440 99.50000% : 27222.646us 00:06:50.440 99.90000% : 32062.228us 00:06:50.440 99.99000% : 32465.526us 00:06:50.440 99.99900% : 32465.526us 00:06:50.440 99.99990% : 32465.526us 00:06:50.440 99.99999% : 32465.526us 00:06:50.440 00:06:50.440 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:06:50.440 ================================================================================= 00:06:50.440 1.00000% : 5822.622us 00:06:50.440 10.00000% : 5948.652us 00:06:50.440 25.00000% : 6125.095us 00:06:50.440 50.00000% : 6351.951us 00:06:50.440 75.00000% : 6604.012us 00:06:50.440 90.00000% : 7158.548us 00:06:50.440 95.00000% : 9527.926us 00:06:50.440 98.00000% : 11090.708us 00:06:50.440 99.00000% : 12451.840us 00:06:50.440 99.50000% : 25508.628us 00:06:50.440 99.90000% : 30449.034us 00:06:50.440 99.99000% : 30852.332us 00:06:50.440 99.99900% : 30852.332us 00:06:50.441 99.99990% : 30852.332us 00:06:50.441 99.99999% : 30852.332us 00:06:50.441 00:06:50.441 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:06:50.441 ================================================================================= 00:06:50.441 1.00000% : 5822.622us 00:06:50.441 10.00000% : 5948.652us 00:06:50.441 25.00000% : 6099.889us 00:06:50.441 50.00000% : 6351.951us 00:06:50.441 75.00000% : 6604.012us 00:06:50.441 90.00000% : 7158.548us 00:06:50.441 95.00000% : 9578.338us 00:06:50.441 98.00000% : 11040.295us 00:06:50.441 99.00000% : 12603.077us 00:06:50.441 99.50000% : 24197.908us 00:06:50.441 99.90000% : 29037.489us 00:06:50.441 99.99000% : 29440.788us 00:06:50.441 99.99900% : 29440.788us 00:06:50.441 99.99990% : 29440.788us 00:06:50.441 99.99999% : 29440.788us 00:06:50.441 00:06:50.441 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:06:50.441 ================================================================================= 00:06:50.441 1.00000% : 5822.622us 00:06:50.441 10.00000% : 5948.652us 00:06:50.441 25.00000% : 6125.095us 00:06:50.441 50.00000% : 6351.951us 00:06:50.441 75.00000% : 6604.012us 00:06:50.441 90.00000% : 7108.135us 00:06:50.441 95.00000% : 9679.163us 00:06:50.441 98.00000% : 11141.120us 00:06:50.441 99.00000% : 
12603.077us 00:06:50.441 99.50000% : 22383.065us 00:06:50.441 99.90000% : 27222.646us 00:06:50.441 99.99000% : 27625.945us 00:06:50.441 99.99900% : 27625.945us 00:06:50.441 99.99990% : 27625.945us 00:06:50.441 99.99999% : 27625.945us 00:06:50.441 00:06:50.441 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:06:50.441 ================================================================================= 00:06:50.441 1.00000% : 5822.622us 00:06:50.441 10.00000% : 5948.652us 00:06:50.441 25.00000% : 6125.095us 00:06:50.441 50.00000% : 6351.951us 00:06:50.441 75.00000% : 6604.012us 00:06:50.441 90.00000% : 7208.960us 00:06:50.441 95.00000% : 9628.751us 00:06:50.441 98.00000% : 11090.708us 00:06:50.441 99.00000% : 12351.015us 00:06:50.441 99.50000% : 20669.046us 00:06:50.441 99.90000% : 25306.978us 00:06:50.441 99.99000% : 25811.102us 00:06:50.441 99.99900% : 25811.102us 00:06:50.441 99.99990% : 25811.102us 00:06:50.441 99.99999% : 25811.102us 00:06:50.441 00:06:50.441 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:06:50.441 ================================================================================= 00:06:50.441 1.00000% : 5822.622us 00:06:50.441 10.00000% : 5948.652us 00:06:50.441 25.00000% : 6125.095us 00:06:50.441 50.00000% : 6351.951us 00:06:50.441 75.00000% : 6604.012us 00:06:50.441 90.00000% : 7208.960us 00:06:50.441 95.00000% : 9427.102us 00:06:50.441 98.00000% : 11241.945us 00:06:50.441 99.00000% : 11897.305us 00:06:50.441 99.50000% : 15526.991us 00:06:50.441 99.90000% : 20265.748us 00:06:50.441 99.99000% : 20769.871us 00:06:50.441 99.99900% : 20769.871us 00:06:50.441 99.99990% : 20769.871us 00:06:50.441 99.99999% : 20769.871us 00:06:50.441 00:06:50.441 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:06:50.441 ============================================================================== 00:06:50.441 Range in us Cumulative IO count 00:06:50.441 5570.560 - 5595.766: 0.0105% ( 2) 00:06:50.441 5595.766 - 5620.972: 0.0367% ( 5) 00:06:50.441 5620.972 - 5646.178: 0.0786% ( 8) 00:06:50.441 5646.178 - 5671.385: 0.2307% ( 29) 00:06:50.441 5671.385 - 5696.591: 0.4719% ( 46) 00:06:50.441 5696.591 - 5721.797: 0.9753% ( 96) 00:06:50.441 5721.797 - 5747.003: 1.6516% ( 129) 00:06:50.441 5747.003 - 5772.209: 2.7055% ( 201) 00:06:50.441 5772.209 - 5797.415: 4.1579% ( 277) 00:06:50.441 5797.415 - 5822.622: 5.6156% ( 278) 00:06:50.441 5822.622 - 5847.828: 7.3354% ( 328) 00:06:50.441 5847.828 - 5873.034: 9.4432% ( 402) 00:06:50.441 5873.034 - 5898.240: 11.4933% ( 391) 00:06:50.441 5898.240 - 5923.446: 13.4805% ( 379) 00:06:50.441 5923.446 - 5948.652: 15.3366% ( 354) 00:06:50.441 5948.652 - 5973.858: 17.3553% ( 385) 00:06:50.441 5973.858 - 5999.065: 19.3635% ( 383) 00:06:50.441 5999.065 - 6024.271: 21.4451% ( 397) 00:06:50.441 6024.271 - 6049.477: 23.3536% ( 364) 00:06:50.441 6049.477 - 6074.683: 25.4509% ( 400) 00:06:50.441 6074.683 - 6099.889: 27.5902% ( 408) 00:06:50.441 6099.889 - 6125.095: 29.7085% ( 404) 00:06:50.441 6125.095 - 6150.302: 31.8268% ( 404) 00:06:50.441 6150.302 - 6175.508: 33.9818% ( 411) 00:06:50.441 6175.508 - 6200.714: 36.0214% ( 389) 00:06:50.441 6200.714 - 6225.920: 38.2288% ( 421) 00:06:50.441 6225.920 - 6251.126: 40.4572% ( 425) 00:06:50.441 6251.126 - 6276.332: 42.5493% ( 399) 00:06:50.441 6276.332 - 6301.538: 44.7200% ( 414) 00:06:50.441 6301.538 - 6326.745: 46.8855% ( 413) 00:06:50.441 6326.745 - 6351.951: 49.0143% ( 406) 00:06:50.441 6351.951 - 6377.157: 51.3056% ( 437) 00:06:50.441 6377.157 - 6402.363: 53.4815% ( 415) 
00:06:50.441 6402.363 - 6427.569: 55.6575% ( 415) 00:06:50.441 6427.569 - 6452.775: 57.9069% ( 429) 00:06:50.441 6452.775 - 6503.188: 62.2064% ( 820) 00:06:50.441 6503.188 - 6553.600: 66.7051% ( 858) 00:06:50.441 6553.600 - 6604.012: 71.0990% ( 838) 00:06:50.441 6604.012 - 6654.425: 75.4876% ( 837) 00:06:50.441 6654.425 - 6704.837: 79.6665% ( 797) 00:06:50.441 6704.837 - 6755.249: 83.1690% ( 668) 00:06:50.441 6755.249 - 6805.662: 85.6281% ( 469) 00:06:50.441 6805.662 - 6856.074: 87.0176% ( 265) 00:06:50.441 6856.074 - 6906.486: 87.7779% ( 145) 00:06:50.441 6906.486 - 6956.898: 88.3442% ( 108) 00:06:50.441 6956.898 - 7007.311: 88.8108% ( 89) 00:06:50.441 7007.311 - 7057.723: 89.2460% ( 83) 00:06:50.441 7057.723 - 7108.135: 89.6235% ( 72) 00:06:50.441 7108.135 - 7158.548: 89.9381% ( 60) 00:06:50.441 7158.548 - 7208.960: 90.2055% ( 51) 00:06:50.441 7208.960 - 7259.372: 90.4729% ( 51) 00:06:50.441 7259.372 - 7309.785: 90.6669% ( 37) 00:06:50.441 7309.785 - 7360.197: 90.8714% ( 39) 00:06:50.441 7360.197 - 7410.609: 91.0654% ( 37) 00:06:50.441 7410.609 - 7461.022: 91.2280% ( 31) 00:06:50.441 7461.022 - 7511.434: 91.3800% ( 29) 00:06:50.441 7511.434 - 7561.846: 91.5321% ( 29) 00:06:50.441 7561.846 - 7612.258: 91.6841% ( 29) 00:06:50.441 7612.258 - 7662.671: 91.7995% ( 22) 00:06:50.441 7662.671 - 7713.083: 91.9148% ( 22) 00:06:50.441 7713.083 - 7763.495: 92.0250% ( 21) 00:06:50.441 7763.495 - 7813.908: 92.1141% ( 17) 00:06:50.441 7813.908 - 7864.320: 92.2085% ( 18) 00:06:50.441 7864.320 - 7914.732: 92.3238% ( 22) 00:06:50.441 7914.732 - 7965.145: 92.4182% ( 18) 00:06:50.441 7965.145 - 8015.557: 92.5073% ( 17) 00:06:50.441 8015.557 - 8065.969: 92.6070% ( 19) 00:06:50.441 8065.969 - 8116.382: 92.7328% ( 24) 00:06:50.441 8116.382 - 8166.794: 92.8586% ( 24) 00:06:50.441 8166.794 - 8217.206: 92.9792% ( 23) 00:06:50.441 8217.206 - 8267.618: 93.0631% ( 16) 00:06:50.441 8267.618 - 8318.031: 93.1890% ( 24) 00:06:50.441 8318.031 - 8368.443: 93.3043% ( 22) 00:06:50.441 8368.443 - 8418.855: 93.4092% ( 20) 00:06:50.441 8418.855 - 8469.268: 93.5193% ( 21) 00:06:50.441 8469.268 - 8519.680: 93.6346% ( 22) 00:06:50.441 8519.680 - 8570.092: 93.7185% ( 16) 00:06:50.441 8570.092 - 8620.505: 93.8129% ( 18) 00:06:50.441 8620.505 - 8670.917: 93.8968% ( 16) 00:06:50.441 8670.917 - 8721.329: 93.9807% ( 16) 00:06:50.441 8721.329 - 8771.742: 94.0489% ( 13) 00:06:50.441 8771.742 - 8822.154: 94.1380% ( 17) 00:06:50.441 8822.154 - 8872.566: 94.2167% ( 15) 00:06:50.441 8872.566 - 8922.978: 94.2901% ( 14) 00:06:50.441 8922.978 - 8973.391: 94.3687% ( 15) 00:06:50.441 8973.391 - 9023.803: 94.4631% ( 18) 00:06:50.441 9023.803 - 9074.215: 94.5365% ( 14) 00:06:50.441 9074.215 - 9124.628: 94.6414% ( 20) 00:06:50.441 9124.628 - 9175.040: 94.7253% ( 16) 00:06:50.441 9175.040 - 9225.452: 94.8144% ( 17) 00:06:50.441 9225.452 - 9275.865: 94.8773% ( 12) 00:06:50.441 9275.865 - 9326.277: 94.9612% ( 16) 00:06:50.441 9326.277 - 9376.689: 95.0294% ( 13) 00:06:50.441 9376.689 - 9427.102: 95.0870% ( 11) 00:06:50.441 9427.102 - 9477.514: 95.1657% ( 15) 00:06:50.441 9477.514 - 9527.926: 95.2548% ( 17) 00:06:50.441 9527.926 - 9578.338: 95.3177% ( 12) 00:06:50.441 9578.338 - 9628.751: 95.4174% ( 19) 00:06:50.441 9628.751 - 9679.163: 95.5013% ( 16) 00:06:50.441 9679.163 - 9729.575: 95.5956% ( 18) 00:06:50.441 9729.575 - 9779.988: 95.6638% ( 13) 00:06:50.441 9779.988 - 9830.400: 95.7215% ( 11) 00:06:50.441 9830.400 - 9880.812: 95.7949% ( 14) 00:06:50.441 9880.812 - 9931.225: 95.8630% ( 13) 00:06:50.441 9931.225 - 9981.637: 95.9155% ( 10) 00:06:50.441 
9981.637 - 10032.049: 95.9836% ( 13) 00:06:50.441 10032.049 - 10082.462: 96.0256% ( 8) 00:06:50.441 10082.462 - 10132.874: 96.0833% ( 11) 00:06:50.441 10132.874 - 10183.286: 96.1462% ( 12) 00:06:50.441 10183.286 - 10233.698: 96.2039% ( 11) 00:06:50.441 10233.698 - 10284.111: 96.2930% ( 17) 00:06:50.441 10284.111 - 10334.523: 96.3874% ( 18) 00:06:50.441 10334.523 - 10384.935: 96.4870% ( 19) 00:06:50.441 10384.935 - 10435.348: 96.5866% ( 19) 00:06:50.441 10435.348 - 10485.760: 96.7072% ( 23) 00:06:50.441 10485.760 - 10536.172: 96.8016% ( 18) 00:06:50.441 10536.172 - 10586.585: 96.8750% ( 14) 00:06:50.441 10586.585 - 10636.997: 96.9484% ( 14) 00:06:50.442 10636.997 - 10687.409: 97.0375% ( 17) 00:06:50.442 10687.409 - 10737.822: 97.1214% ( 16) 00:06:50.442 10737.822 - 10788.234: 97.2211% ( 19) 00:06:50.442 10788.234 - 10838.646: 97.2892% ( 13) 00:06:50.442 10838.646 - 10889.058: 97.3888% ( 19) 00:06:50.442 10889.058 - 10939.471: 97.4727% ( 16) 00:06:50.442 10939.471 - 10989.883: 97.5881% ( 22) 00:06:50.442 10989.883 - 11040.295: 97.6667% ( 15) 00:06:50.442 11040.295 - 11090.708: 97.7873% ( 23) 00:06:50.442 11090.708 - 11141.120: 97.8712% ( 16) 00:06:50.442 11141.120 - 11191.532: 97.9918% ( 23) 00:06:50.442 11191.532 - 11241.945: 98.1019% ( 21) 00:06:50.442 11241.945 - 11292.357: 98.1753% ( 14) 00:06:50.442 11292.357 - 11342.769: 98.2697% ( 18) 00:06:50.442 11342.769 - 11393.182: 98.3536% ( 16) 00:06:50.442 11393.182 - 11443.594: 98.4427% ( 17) 00:06:50.442 11443.594 - 11494.006: 98.5004% ( 11) 00:06:50.442 11494.006 - 11544.418: 98.5843% ( 16) 00:06:50.442 11544.418 - 11594.831: 98.6472% ( 12) 00:06:50.442 11594.831 - 11645.243: 98.7259% ( 15) 00:06:50.442 11645.243 - 11695.655: 98.7626% ( 7) 00:06:50.442 11695.655 - 11746.068: 98.8098% ( 9) 00:06:50.442 11746.068 - 11796.480: 98.8622% ( 10) 00:06:50.442 11796.480 - 11846.892: 98.8989% ( 7) 00:06:50.442 11846.892 - 11897.305: 98.9461% ( 9) 00:06:50.442 11897.305 - 11947.717: 98.9776% ( 6) 00:06:50.442 11947.717 - 11998.129: 98.9985% ( 4) 00:06:50.442 11998.129 - 12048.542: 99.0300% ( 6) 00:06:50.442 12048.542 - 12098.954: 99.0562% ( 5) 00:06:50.442 12098.954 - 12149.366: 99.0772% ( 4) 00:06:50.442 12149.366 - 12199.778: 99.1086% ( 6) 00:06:50.442 12199.778 - 12250.191: 99.1296% ( 4) 00:06:50.442 12250.191 - 12300.603: 99.1558% ( 5) 00:06:50.442 12300.603 - 12351.015: 99.1820% ( 5) 00:06:50.442 12351.015 - 12401.428: 99.1925% ( 2) 00:06:50.442 12401.428 - 12451.840: 99.2188% ( 5) 00:06:50.442 12451.840 - 12502.252: 99.2397% ( 4) 00:06:50.442 12502.252 - 12552.665: 99.2502% ( 2) 00:06:50.442 12552.665 - 12603.077: 99.2712% ( 4) 00:06:50.442 12603.077 - 12653.489: 99.2817% ( 2) 00:06:50.442 12653.489 - 12703.902: 99.2869% ( 1) 00:06:50.442 12703.902 - 12754.314: 99.2974% ( 2) 00:06:50.442 12754.314 - 12804.726: 99.3079% ( 2) 00:06:50.442 12804.726 - 12855.138: 99.3184% ( 2) 00:06:50.442 12855.138 - 12905.551: 99.3289% ( 2) 00:06:50.442 26214.400 - 26416.049: 99.3446% ( 3) 00:06:50.442 26416.049 - 26617.698: 99.3865% ( 8) 00:06:50.442 26617.698 - 26819.348: 99.4285% ( 8) 00:06:50.442 26819.348 - 27020.997: 99.4704% ( 8) 00:06:50.442 27020.997 - 27222.646: 99.5124% ( 8) 00:06:50.442 27222.646 - 27424.295: 99.5596% ( 9) 00:06:50.442 27424.295 - 27625.945: 99.5963% ( 7) 00:06:50.442 27625.945 - 27827.594: 99.6382% ( 8) 00:06:50.442 27827.594 - 28029.243: 99.6644% ( 5) 00:06:50.442 30650.683 - 30852.332: 99.6697% ( 1) 00:06:50.442 30852.332 - 31053.982: 99.7064% ( 7) 00:06:50.442 31053.982 - 31255.631: 99.7483% ( 8) 00:06:50.442 31255.631 - 31457.280: 
99.7955% ( 9) 00:06:50.442 31457.280 - 31658.929: 99.8375% ( 8) 00:06:50.442 31658.929 - 31860.578: 99.8742% ( 7) 00:06:50.442 31860.578 - 32062.228: 99.9214% ( 9) 00:06:50.442 32062.228 - 32263.877: 99.9633% ( 8) 00:06:50.442 32263.877 - 32465.526: 100.0000% ( 7) 00:06:50.442 00:06:50.442 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:06:50.442 ============================================================================== 00:06:50.442 Range in us Cumulative IO count 00:06:50.442 5646.178 - 5671.385: 0.0262% ( 5) 00:06:50.442 5671.385 - 5696.591: 0.0315% ( 1) 00:06:50.442 5696.591 - 5721.797: 0.0839% ( 10) 00:06:50.442 5721.797 - 5747.003: 0.1730% ( 17) 00:06:50.442 5747.003 - 5772.209: 0.4299% ( 49) 00:06:50.442 5772.209 - 5797.415: 0.8547% ( 81) 00:06:50.442 5797.415 - 5822.622: 1.6674% ( 155) 00:06:50.442 5822.622 - 5847.828: 2.6898% ( 195) 00:06:50.442 5847.828 - 5873.034: 4.1632% ( 281) 00:06:50.442 5873.034 - 5898.240: 6.0508% ( 360) 00:06:50.442 5898.240 - 5923.446: 8.0117% ( 374) 00:06:50.442 5923.446 - 5948.652: 10.3293% ( 442) 00:06:50.442 5948.652 - 5973.858: 12.7359% ( 459) 00:06:50.442 5973.858 - 5999.065: 15.0954% ( 450) 00:06:50.442 5999.065 - 6024.271: 17.5126% ( 461) 00:06:50.442 6024.271 - 6049.477: 19.9455% ( 464) 00:06:50.442 6049.477 - 6074.683: 22.3417% ( 457) 00:06:50.442 6074.683 - 6099.889: 24.7641% ( 462) 00:06:50.442 6099.889 - 6125.095: 27.2599% ( 476) 00:06:50.442 6125.095 - 6150.302: 29.7085% ( 467) 00:06:50.442 6150.302 - 6175.508: 32.2095% ( 477) 00:06:50.442 6175.508 - 6200.714: 34.7158% ( 478) 00:06:50.442 6200.714 - 6225.920: 37.3689% ( 506) 00:06:50.442 6225.920 - 6251.126: 39.9224% ( 487) 00:06:50.442 6251.126 - 6276.332: 42.4916% ( 490) 00:06:50.442 6276.332 - 6301.538: 44.9979% ( 478) 00:06:50.442 6301.538 - 6326.745: 47.5986% ( 496) 00:06:50.442 6326.745 - 6351.951: 50.2202% ( 500) 00:06:50.442 6351.951 - 6377.157: 52.8419% ( 500) 00:06:50.442 6377.157 - 6402.363: 55.4216% ( 492) 00:06:50.442 6402.363 - 6427.569: 58.0380% ( 499) 00:06:50.442 6427.569 - 6452.775: 60.5757% ( 484) 00:06:50.442 6452.775 - 6503.188: 65.7194% ( 981) 00:06:50.442 6503.188 - 6553.600: 70.7844% ( 966) 00:06:50.442 6553.600 - 6604.012: 75.8180% ( 960) 00:06:50.442 6604.012 - 6654.425: 80.3324% ( 861) 00:06:50.442 6654.425 - 6704.837: 83.7196% ( 646) 00:06:50.442 6704.837 - 6755.249: 85.8326% ( 403) 00:06:50.442 6755.249 - 6805.662: 86.9337% ( 210) 00:06:50.442 6805.662 - 6856.074: 87.6625% ( 139) 00:06:50.442 6856.074 - 6906.486: 88.2393% ( 110) 00:06:50.442 6906.486 - 6956.898: 88.8108% ( 109) 00:06:50.442 6956.898 - 7007.311: 89.2775% ( 89) 00:06:50.442 7007.311 - 7057.723: 89.6760% ( 76) 00:06:50.442 7057.723 - 7108.135: 89.9906% ( 60) 00:06:50.442 7108.135 - 7158.548: 90.2580% ( 51) 00:06:50.442 7158.548 - 7208.960: 90.4782% ( 42) 00:06:50.442 7208.960 - 7259.372: 90.6355% ( 30) 00:06:50.442 7259.372 - 7309.785: 90.7980% ( 31) 00:06:50.442 7309.785 - 7360.197: 90.9553% ( 30) 00:06:50.442 7360.197 - 7410.609: 91.1074% ( 29) 00:06:50.442 7410.609 - 7461.022: 91.2594% ( 29) 00:06:50.442 7461.022 - 7511.434: 91.4062% ( 28) 00:06:50.442 7511.434 - 7561.846: 91.5164% ( 21) 00:06:50.442 7561.846 - 7612.258: 91.6160% ( 19) 00:06:50.442 7612.258 - 7662.671: 91.6894% ( 14) 00:06:50.442 7662.671 - 7713.083: 91.8100% ( 23) 00:06:50.442 7713.083 - 7763.495: 91.9096% ( 19) 00:06:50.442 7763.495 - 7813.908: 91.9987% ( 17) 00:06:50.442 7813.908 - 7864.320: 92.0879% ( 17) 00:06:50.442 7864.320 - 7914.732: 92.1823% ( 18) 00:06:50.442 7914.732 - 7965.145: 92.2976% ( 22) 
00:06:50.442 7965.145 - 8015.557: 92.4130% ( 22) 00:06:50.442 8015.557 - 8065.969: 92.5336% ( 23) 00:06:50.442 8065.969 - 8116.382: 92.6384% ( 20) 00:06:50.442 8116.382 - 8166.794: 92.7485% ( 21) 00:06:50.442 8166.794 - 8217.206: 92.8691% ( 23) 00:06:50.442 8217.206 - 8267.618: 92.9792% ( 21) 00:06:50.442 8267.618 - 8318.031: 93.0998% ( 23) 00:06:50.442 8318.031 - 8368.443: 93.2362% ( 26) 00:06:50.442 8368.443 - 8418.855: 93.3620% ( 24) 00:06:50.442 8418.855 - 8469.268: 93.4773% ( 22) 00:06:50.442 8469.268 - 8519.680: 93.5927% ( 22) 00:06:50.442 8519.680 - 8570.092: 93.6976% ( 20) 00:06:50.442 8570.092 - 8620.505: 93.7762% ( 15) 00:06:50.442 8620.505 - 8670.917: 93.8758% ( 19) 00:06:50.442 8670.917 - 8721.329: 93.9702% ( 18) 00:06:50.442 8721.329 - 8771.742: 94.0594% ( 17) 00:06:50.442 8771.742 - 8822.154: 94.1485% ( 17) 00:06:50.442 8822.154 - 8872.566: 94.2376% ( 17) 00:06:50.442 8872.566 - 8922.978: 94.3268% ( 17) 00:06:50.442 8922.978 - 8973.391: 94.4002% ( 14) 00:06:50.442 8973.391 - 9023.803: 94.4736% ( 14) 00:06:50.442 9023.803 - 9074.215: 94.5312% ( 11) 00:06:50.442 9074.215 - 9124.628: 94.5732% ( 8) 00:06:50.442 9124.628 - 9175.040: 94.6518% ( 15) 00:06:50.442 9175.040 - 9225.452: 94.7148% ( 12) 00:06:50.442 9225.452 - 9275.865: 94.7672% ( 10) 00:06:50.442 9275.865 - 9326.277: 94.8039% ( 7) 00:06:50.442 9326.277 - 9376.689: 94.8721% ( 13) 00:06:50.442 9376.689 - 9427.102: 94.9297% ( 11) 00:06:50.442 9427.102 - 9477.514: 94.9822% ( 10) 00:06:50.442 9477.514 - 9527.926: 95.0346% ( 10) 00:06:50.442 9527.926 - 9578.338: 95.0870% ( 10) 00:06:50.442 9578.338 - 9628.751: 95.1395% ( 10) 00:06:50.442 9628.751 - 9679.163: 95.1971% ( 11) 00:06:50.442 9679.163 - 9729.575: 95.2706% ( 14) 00:06:50.442 9729.575 - 9779.988: 95.3440% ( 14) 00:06:50.442 9779.988 - 9830.400: 95.5117% ( 32) 00:06:50.442 9830.400 - 9880.812: 95.6481% ( 26) 00:06:50.442 9880.812 - 9931.225: 95.7529% ( 20) 00:06:50.442 9931.225 - 9981.637: 95.8473% ( 18) 00:06:50.442 9981.637 - 10032.049: 95.9522% ( 20) 00:06:50.442 10032.049 - 10082.462: 96.0256% ( 14) 00:06:50.442 10082.462 - 10132.874: 96.1252% ( 19) 00:06:50.442 10132.874 - 10183.286: 96.2196% ( 18) 00:06:50.442 10183.286 - 10233.698: 96.3087% ( 17) 00:06:50.442 10233.698 - 10284.111: 96.3926% ( 16) 00:06:50.442 10284.111 - 10334.523: 96.4713% ( 15) 00:06:50.442 10334.523 - 10384.935: 96.5499% ( 15) 00:06:50.442 10384.935 - 10435.348: 96.6653% ( 22) 00:06:50.442 10435.348 - 10485.760: 96.7596% ( 18) 00:06:50.442 10485.760 - 10536.172: 96.8435% ( 16) 00:06:50.442 10536.172 - 10586.585: 96.9904% ( 28) 00:06:50.442 10586.585 - 10636.997: 97.0900% ( 19) 00:06:50.442 10636.997 - 10687.409: 97.2053% ( 22) 00:06:50.442 10687.409 - 10737.822: 97.3259% ( 23) 00:06:50.442 10737.822 - 10788.234: 97.4465% ( 23) 00:06:50.443 10788.234 - 10838.646: 97.5514% ( 20) 00:06:50.443 10838.646 - 10889.058: 97.6458% ( 18) 00:06:50.443 10889.058 - 10939.471: 97.7664% ( 23) 00:06:50.443 10939.471 - 10989.883: 97.8765% ( 21) 00:06:50.443 10989.883 - 11040.295: 97.9813% ( 20) 00:06:50.443 11040.295 - 11090.708: 98.1124% ( 25) 00:06:50.443 11090.708 - 11141.120: 98.1963% ( 16) 00:06:50.443 11141.120 - 11191.532: 98.2750% ( 15) 00:06:50.443 11191.532 - 11241.945: 98.3379% ( 12) 00:06:50.443 11241.945 - 11292.357: 98.3956% ( 11) 00:06:50.443 11292.357 - 11342.769: 98.4585% ( 12) 00:06:50.443 11342.769 - 11393.182: 98.5057% ( 9) 00:06:50.443 11393.182 - 11443.594: 98.5476% ( 8) 00:06:50.443 11443.594 - 11494.006: 98.5633% ( 3) 00:06:50.443 11494.006 - 11544.418: 98.5843% ( 4) 00:06:50.443 
11544.418 - 11594.831: 98.6053% ( 4) 00:06:50.443 11594.831 - 11645.243: 98.6263% ( 4) 00:06:50.443 11645.243 - 11695.655: 98.6472% ( 4) 00:06:50.443 11695.655 - 11746.068: 98.6630% ( 3) 00:06:50.443 11746.068 - 11796.480: 98.6944% ( 6) 00:06:50.443 11796.480 - 11846.892: 98.7102% ( 3) 00:06:50.443 11846.892 - 11897.305: 98.7154% ( 1) 00:06:50.443 11897.305 - 11947.717: 98.7206% ( 1) 00:06:50.443 11947.717 - 11998.129: 98.7364% ( 3) 00:06:50.443 11998.129 - 12048.542: 98.7469% ( 2) 00:06:50.443 12048.542 - 12098.954: 98.8203% ( 14) 00:06:50.443 12098.954 - 12149.366: 98.8517% ( 6) 00:06:50.443 12149.366 - 12199.778: 98.8727% ( 4) 00:06:50.443 12199.778 - 12250.191: 98.8989% ( 5) 00:06:50.443 12250.191 - 12300.603: 98.9304% ( 6) 00:06:50.443 12300.603 - 12351.015: 98.9671% ( 7) 00:06:50.443 12351.015 - 12401.428: 98.9985% ( 6) 00:06:50.443 12401.428 - 12451.840: 99.0300% ( 6) 00:06:50.443 12451.840 - 12502.252: 99.0667% ( 7) 00:06:50.443 12502.252 - 12552.665: 99.0982% ( 6) 00:06:50.443 12552.665 - 12603.077: 99.1349% ( 7) 00:06:50.443 12603.077 - 12653.489: 99.1611% ( 5) 00:06:50.443 12653.489 - 12703.902: 99.1925% ( 6) 00:06:50.443 12703.902 - 12754.314: 99.2240% ( 6) 00:06:50.443 12754.314 - 12804.726: 99.2555% ( 6) 00:06:50.443 12804.726 - 12855.138: 99.2764% ( 4) 00:06:50.443 12855.138 - 12905.551: 99.2922% ( 3) 00:06:50.443 12905.551 - 13006.375: 99.3131% ( 4) 00:06:50.443 13006.375 - 13107.200: 99.3289% ( 3) 00:06:50.443 24702.031 - 24802.855: 99.3498% ( 4) 00:06:50.443 24802.855 - 24903.680: 99.3708% ( 4) 00:06:50.443 24903.680 - 25004.505: 99.3918% ( 4) 00:06:50.443 25004.505 - 25105.329: 99.4180% ( 5) 00:06:50.443 25105.329 - 25206.154: 99.4337% ( 3) 00:06:50.443 25206.154 - 25306.978: 99.4599% ( 5) 00:06:50.443 25306.978 - 25407.803: 99.4809% ( 4) 00:06:50.443 25407.803 - 25508.628: 99.5071% ( 5) 00:06:50.443 25508.628 - 25609.452: 99.5281% ( 4) 00:06:50.443 25609.452 - 25710.277: 99.5491% ( 4) 00:06:50.443 25710.277 - 25811.102: 99.5701% ( 4) 00:06:50.443 25811.102 - 26012.751: 99.6172% ( 9) 00:06:50.443 26012.751 - 26214.400: 99.6644% ( 9) 00:06:50.443 29037.489 - 29239.138: 99.6749% ( 2) 00:06:50.443 29239.138 - 29440.788: 99.7169% ( 8) 00:06:50.443 29440.788 - 29642.437: 99.7588% ( 8) 00:06:50.443 29642.437 - 29844.086: 99.8060% ( 9) 00:06:50.443 29844.086 - 30045.735: 99.8532% ( 9) 00:06:50.443 30045.735 - 30247.385: 99.8951% ( 8) 00:06:50.443 30247.385 - 30449.034: 99.9423% ( 9) 00:06:50.443 30449.034 - 30650.683: 99.9895% ( 9) 00:06:50.443 30650.683 - 30852.332: 100.0000% ( 2) 00:06:50.443 00:06:50.443 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:06:50.443 ============================================================================== 00:06:50.443 Range in us Cumulative IO count 00:06:50.443 5696.591 - 5721.797: 0.0262% ( 5) 00:06:50.443 5721.797 - 5747.003: 0.1101% ( 16) 00:06:50.443 5747.003 - 5772.209: 0.3146% ( 39) 00:06:50.443 5772.209 - 5797.415: 0.8180% ( 96) 00:06:50.443 5797.415 - 5822.622: 1.6464% ( 158) 00:06:50.443 5822.622 - 5847.828: 2.8995% ( 239) 00:06:50.443 5847.828 - 5873.034: 4.5354% ( 312) 00:06:50.443 5873.034 - 5898.240: 6.2395% ( 325) 00:06:50.443 5898.240 - 5923.446: 8.3316% ( 399) 00:06:50.443 5923.446 - 5948.652: 10.5023% ( 414) 00:06:50.443 5948.652 - 5973.858: 12.9404% ( 465) 00:06:50.443 5973.858 - 5999.065: 15.4467% ( 478) 00:06:50.443 5999.065 - 6024.271: 17.9111% ( 470) 00:06:50.443 6024.271 - 6049.477: 20.3807% ( 471) 00:06:50.443 6049.477 - 6074.683: 22.8188% ( 465) 00:06:50.443 6074.683 - 6099.889: 25.2517% ( 464) 
00:06:50.443 6099.889 - 6125.095: 27.7580% ( 478) 00:06:50.443 6125.095 - 6150.302: 30.1751% ( 461) 00:06:50.443 6150.302 - 6175.508: 32.5870% ( 460) 00:06:50.443 6175.508 - 6200.714: 35.1458% ( 488) 00:06:50.443 6200.714 - 6225.920: 37.5786% ( 464) 00:06:50.443 6225.920 - 6251.126: 40.1059% ( 482) 00:06:50.443 6251.126 - 6276.332: 42.6961% ( 494) 00:06:50.443 6276.332 - 6301.538: 45.3177% ( 500) 00:06:50.443 6301.538 - 6326.745: 47.8922% ( 491) 00:06:50.443 6326.745 - 6351.951: 50.4614% ( 490) 00:06:50.443 6351.951 - 6377.157: 53.0568% ( 495) 00:06:50.443 6377.157 - 6402.363: 55.6470% ( 494) 00:06:50.443 6402.363 - 6427.569: 58.2057% ( 488) 00:06:50.443 6427.569 - 6452.775: 60.7120% ( 478) 00:06:50.443 6452.775 - 6503.188: 65.9029% ( 990) 00:06:50.443 6503.188 - 6553.600: 71.0675% ( 985) 00:06:50.443 6553.600 - 6604.012: 76.1011% ( 960) 00:06:50.443 6604.012 - 6654.425: 80.6785% ( 873) 00:06:50.443 6654.425 - 6704.837: 83.8874% ( 612) 00:06:50.443 6704.837 - 6755.249: 86.0004% ( 403) 00:06:50.443 6755.249 - 6805.662: 87.0648% ( 203) 00:06:50.443 6805.662 - 6856.074: 87.8041% ( 141) 00:06:50.443 6856.074 - 6906.486: 88.3809% ( 110) 00:06:50.443 6906.486 - 6956.898: 88.8423% ( 88) 00:06:50.443 6956.898 - 7007.311: 89.2722% ( 82) 00:06:50.443 7007.311 - 7057.723: 89.6288% ( 68) 00:06:50.443 7057.723 - 7108.135: 89.9591% ( 63) 00:06:50.443 7108.135 - 7158.548: 90.2003% ( 46) 00:06:50.443 7158.548 - 7208.960: 90.3891% ( 36) 00:06:50.443 7208.960 - 7259.372: 90.5568% ( 32) 00:06:50.443 7259.372 - 7309.785: 90.7246% ( 32) 00:06:50.443 7309.785 - 7360.197: 90.9081% ( 35) 00:06:50.443 7360.197 - 7410.609: 91.0969% ( 36) 00:06:50.443 7410.609 - 7461.022: 91.2699% ( 33) 00:06:50.443 7461.022 - 7511.434: 91.4272% ( 30) 00:06:50.443 7511.434 - 7561.846: 91.5898% ( 31) 00:06:50.443 7561.846 - 7612.258: 91.6894% ( 19) 00:06:50.443 7612.258 - 7662.671: 91.7943% ( 20) 00:06:50.443 7662.671 - 7713.083: 91.9044% ( 21) 00:06:50.443 7713.083 - 7763.495: 91.9987% ( 18) 00:06:50.443 7763.495 - 7813.908: 92.0984% ( 19) 00:06:50.443 7813.908 - 7864.320: 92.2032% ( 20) 00:06:50.443 7864.320 - 7914.732: 92.3029% ( 19) 00:06:50.443 7914.732 - 7965.145: 92.3972% ( 18) 00:06:50.443 7965.145 - 8015.557: 92.4969% ( 19) 00:06:50.443 8015.557 - 8065.969: 92.6122% ( 22) 00:06:50.443 8065.969 - 8116.382: 92.7171% ( 20) 00:06:50.443 8116.382 - 8166.794: 92.8377% ( 23) 00:06:50.443 8166.794 - 8217.206: 92.9635% ( 24) 00:06:50.443 8217.206 - 8267.618: 93.0579% ( 18) 00:06:50.443 8267.618 - 8318.031: 93.1732% ( 22) 00:06:50.443 8318.031 - 8368.443: 93.2729% ( 19) 00:06:50.443 8368.443 - 8418.855: 93.3672% ( 18) 00:06:50.443 8418.855 - 8469.268: 93.4564% ( 17) 00:06:50.443 8469.268 - 8519.680: 93.5612% ( 20) 00:06:50.443 8519.680 - 8570.092: 93.6399% ( 15) 00:06:50.443 8570.092 - 8620.505: 93.7290% ( 17) 00:06:50.443 8620.505 - 8670.917: 93.8182% ( 17) 00:06:50.443 8670.917 - 8721.329: 93.8968% ( 15) 00:06:50.443 8721.329 - 8771.742: 93.9755% ( 15) 00:06:50.443 8771.742 - 8822.154: 94.0384% ( 12) 00:06:50.443 8822.154 - 8872.566: 94.1065% ( 13) 00:06:50.443 8872.566 - 8922.978: 94.1695% ( 12) 00:06:50.443 8922.978 - 8973.391: 94.2376% ( 13) 00:06:50.443 8973.391 - 9023.803: 94.3005% ( 12) 00:06:50.443 9023.803 - 9074.215: 94.3792% ( 15) 00:06:50.443 9074.215 - 9124.628: 94.4421% ( 12) 00:06:50.443 9124.628 - 9175.040: 94.5208% ( 15) 00:06:50.443 9175.040 - 9225.452: 94.5732% ( 10) 00:06:50.443 9225.452 - 9275.865: 94.6256% ( 10) 00:06:50.443 9275.865 - 9326.277: 94.6833% ( 11) 00:06:50.443 9326.277 - 9376.689: 94.7462% ( 12) 
00:06:50.443 9376.689 - 9427.102: 94.7987% ( 10) 00:06:50.443 9427.102 - 9477.514: 94.8616% ( 12) 00:06:50.443 9477.514 - 9527.926: 94.9507% ( 17) 00:06:50.443 9527.926 - 9578.338: 95.0398% ( 17) 00:06:50.443 9578.338 - 9628.751: 95.1342% ( 18) 00:06:50.443 9628.751 - 9679.163: 95.2548% ( 23) 00:06:50.443 9679.163 - 9729.575: 95.3649% ( 21) 00:06:50.443 9729.575 - 9779.988: 95.4803% ( 22) 00:06:50.443 9779.988 - 9830.400: 95.5799% ( 19) 00:06:50.443 9830.400 - 9880.812: 95.6428% ( 12) 00:06:50.443 9880.812 - 9931.225: 95.7477% ( 20) 00:06:50.443 9931.225 - 9981.637: 95.8263% ( 15) 00:06:50.443 9981.637 - 10032.049: 95.9102% ( 16) 00:06:50.443 10032.049 - 10082.462: 95.9994% ( 17) 00:06:50.443 10082.462 - 10132.874: 96.1042% ( 20) 00:06:50.443 10132.874 - 10183.286: 96.1934% ( 17) 00:06:50.443 10183.286 - 10233.698: 96.2930% ( 19) 00:06:50.443 10233.698 - 10284.111: 96.3821% ( 17) 00:06:50.443 10284.111 - 10334.523: 96.4451% ( 12) 00:06:50.443 10334.523 - 10384.935: 96.5289% ( 16) 00:06:50.443 10384.935 - 10435.348: 96.6023% ( 14) 00:06:50.443 10435.348 - 10485.760: 96.7020% ( 19) 00:06:50.443 10485.760 - 10536.172: 96.8226% ( 23) 00:06:50.443 10536.172 - 10586.585: 96.9379% ( 22) 00:06:50.443 10586.585 - 10636.997: 97.0638% ( 24) 00:06:50.443 10636.997 - 10687.409: 97.1896% ( 24) 00:06:50.443 10687.409 - 10737.822: 97.3049% ( 22) 00:06:50.443 10737.822 - 10788.234: 97.4308% ( 24) 00:06:50.444 10788.234 - 10838.646: 97.5619% ( 25) 00:06:50.444 10838.646 - 10889.058: 97.6982% ( 26) 00:06:50.444 10889.058 - 10939.471: 97.8135% ( 22) 00:06:50.444 10939.471 - 10989.883: 97.9341% ( 23) 00:06:50.444 10989.883 - 11040.295: 98.0390% ( 20) 00:06:50.444 11040.295 - 11090.708: 98.1491% ( 21) 00:06:50.444 11090.708 - 11141.120: 98.2330% ( 16) 00:06:50.444 11141.120 - 11191.532: 98.3169% ( 16) 00:06:50.444 11191.532 - 11241.945: 98.3903% ( 14) 00:06:50.444 11241.945 - 11292.357: 98.4480% ( 11) 00:06:50.444 11292.357 - 11342.769: 98.5004% ( 10) 00:06:50.444 11342.769 - 11393.182: 98.5266% ( 5) 00:06:50.444 11393.182 - 11443.594: 98.5476% ( 4) 00:06:50.444 11443.594 - 11494.006: 98.5633% ( 3) 00:06:50.444 11494.006 - 11544.418: 98.5843% ( 4) 00:06:50.444 11544.418 - 11594.831: 98.6000% ( 3) 00:06:50.444 11594.831 - 11645.243: 98.6210% ( 4) 00:06:50.444 11645.243 - 11695.655: 98.6367% ( 3) 00:06:50.444 11695.655 - 11746.068: 98.6525% ( 3) 00:06:50.444 11746.068 - 11796.480: 98.6577% ( 1) 00:06:50.444 11947.717 - 11998.129: 98.6734% ( 3) 00:06:50.444 11998.129 - 12048.542: 98.6997% ( 5) 00:06:50.444 12048.542 - 12098.954: 98.7049% ( 1) 00:06:50.444 12098.954 - 12149.366: 98.7311% ( 5) 00:06:50.444 12149.366 - 12199.778: 98.7573% ( 5) 00:06:50.444 12199.778 - 12250.191: 98.7888% ( 6) 00:06:50.444 12250.191 - 12300.603: 98.8255% ( 7) 00:06:50.444 12300.603 - 12351.015: 98.8622% ( 7) 00:06:50.444 12351.015 - 12401.428: 98.8937% ( 6) 00:06:50.444 12401.428 - 12451.840: 98.9304% ( 7) 00:06:50.444 12451.840 - 12502.252: 98.9618% ( 6) 00:06:50.444 12502.252 - 12552.665: 98.9880% ( 5) 00:06:50.444 12552.665 - 12603.077: 99.0247% ( 7) 00:06:50.444 12603.077 - 12653.489: 99.0562% ( 6) 00:06:50.444 12653.489 - 12703.902: 99.0877% ( 6) 00:06:50.444 12703.902 - 12754.314: 99.1191% ( 6) 00:06:50.444 12754.314 - 12804.726: 99.1506% ( 6) 00:06:50.444 12804.726 - 12855.138: 99.1820% ( 6) 00:06:50.444 12855.138 - 12905.551: 99.2135% ( 6) 00:06:50.444 12905.551 - 13006.375: 99.2607% ( 9) 00:06:50.444 13006.375 - 13107.200: 99.2869% ( 5) 00:06:50.444 13107.200 - 13208.025: 99.3131% ( 5) 00:06:50.444 13208.025 - 13308.849: 
99.3289% ( 3) 00:06:50.444 23290.486 - 23391.311: 99.3446% ( 3) 00:06:50.444 23391.311 - 23492.135: 99.3656% ( 4) 00:06:50.444 23492.135 - 23592.960: 99.3865% ( 4) 00:06:50.444 23592.960 - 23693.785: 99.4128% ( 5) 00:06:50.444 23693.785 - 23794.609: 99.4337% ( 4) 00:06:50.444 23794.609 - 23895.434: 99.4442% ( 2) 00:06:50.444 23895.434 - 23996.258: 99.4652% ( 4) 00:06:50.444 23996.258 - 24097.083: 99.4862% ( 4) 00:06:50.444 24097.083 - 24197.908: 99.5071% ( 4) 00:06:50.444 24197.908 - 24298.732: 99.5281% ( 4) 00:06:50.444 24298.732 - 24399.557: 99.5491% ( 4) 00:06:50.444 24399.557 - 24500.382: 99.5701% ( 4) 00:06:50.444 24500.382 - 24601.206: 99.5910% ( 4) 00:06:50.444 24601.206 - 24702.031: 99.6172% ( 5) 00:06:50.444 24702.031 - 24802.855: 99.6382% ( 4) 00:06:50.444 24802.855 - 24903.680: 99.6592% ( 4) 00:06:50.444 24903.680 - 25004.505: 99.6644% ( 1) 00:06:50.444 27625.945 - 27827.594: 99.6749% ( 2) 00:06:50.444 27827.594 - 28029.243: 99.7169% ( 8) 00:06:50.444 28029.243 - 28230.892: 99.7536% ( 7) 00:06:50.444 28230.892 - 28432.542: 99.7955% ( 8) 00:06:50.444 28432.542 - 28634.191: 99.8427% ( 9) 00:06:50.444 28634.191 - 28835.840: 99.8846% ( 8) 00:06:50.444 28835.840 - 29037.489: 99.9266% ( 8) 00:06:50.444 29037.489 - 29239.138: 99.9738% ( 9) 00:06:50.444 29239.138 - 29440.788: 100.0000% ( 5) 00:06:50.444 00:06:50.444 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:06:50.444 ============================================================================== 00:06:50.444 Range in us Cumulative IO count 00:06:50.444 5696.591 - 5721.797: 0.0262% ( 5) 00:06:50.444 5721.797 - 5747.003: 0.0891% ( 12) 00:06:50.444 5747.003 - 5772.209: 0.2779% ( 36) 00:06:50.444 5772.209 - 5797.415: 0.7078% ( 82) 00:06:50.444 5797.415 - 5822.622: 1.3790% ( 128) 00:06:50.444 5822.622 - 5847.828: 2.4276% ( 200) 00:06:50.444 5847.828 - 5873.034: 3.9744% ( 295) 00:06:50.444 5873.034 - 5898.240: 5.8672% ( 361) 00:06:50.444 5898.240 - 5923.446: 7.9069% ( 389) 00:06:50.444 5923.446 - 5948.652: 10.1615% ( 430) 00:06:50.444 5948.652 - 5973.858: 12.6363% ( 472) 00:06:50.444 5973.858 - 5999.065: 14.9906% ( 449) 00:06:50.444 5999.065 - 6024.271: 17.4077% ( 461) 00:06:50.444 6024.271 - 6049.477: 19.7462% ( 446) 00:06:50.444 6049.477 - 6074.683: 22.1424% ( 457) 00:06:50.444 6074.683 - 6099.889: 24.6487% ( 478) 00:06:50.444 6099.889 - 6125.095: 27.2022% ( 487) 00:06:50.444 6125.095 - 6150.302: 29.7452% ( 485) 00:06:50.444 6150.302 - 6175.508: 32.3249% ( 492) 00:06:50.444 6175.508 - 6200.714: 34.8312% ( 478) 00:06:50.444 6200.714 - 6225.920: 37.3479% ( 480) 00:06:50.444 6225.920 - 6251.126: 39.8909% ( 485) 00:06:50.444 6251.126 - 6276.332: 42.4706% ( 492) 00:06:50.444 6276.332 - 6301.538: 45.0294% ( 488) 00:06:50.444 6301.538 - 6326.745: 47.6405% ( 498) 00:06:50.444 6326.745 - 6351.951: 50.1940% ( 487) 00:06:50.444 6351.951 - 6377.157: 52.7422% ( 486) 00:06:50.444 6377.157 - 6402.363: 55.4268% ( 512) 00:06:50.444 6402.363 - 6427.569: 57.9803% ( 487) 00:06:50.444 6427.569 - 6452.775: 60.6386% ( 507) 00:06:50.444 6452.775 - 6503.188: 65.8819% ( 1000) 00:06:50.444 6503.188 - 6553.600: 70.9994% ( 976) 00:06:50.444 6553.600 - 6604.012: 76.0644% ( 966) 00:06:50.444 6604.012 - 6654.425: 80.6260% ( 870) 00:06:50.444 6654.425 - 6704.837: 83.9398% ( 632) 00:06:50.444 6704.837 - 6755.249: 85.9323% ( 380) 00:06:50.444 6755.249 - 6805.662: 86.9704% ( 198) 00:06:50.444 6805.662 - 6856.074: 87.7989% ( 158) 00:06:50.444 6856.074 - 6906.486: 88.4123% ( 117) 00:06:50.444 6906.486 - 6956.898: 88.9576% ( 104) 00:06:50.444 6956.898 - 
00:06:50.444 [... per-bucket latency data continues: 7007.311us through 27625.945us, cumulative 89.4610% -> 100.0000% ...]
00:06:50.445 
00:06:50.445 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:06:50.445 ==============================================================================
00:06:50.445        Range in us     Cumulative    IO count
00:06:50.446 [... per-bucket latency data: 5696.591us through 25811.102us, cumulative 0.0262% -> 100.0000% ...]
00:06:50.446 
00:06:50.446 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:06:50.446 ==============================================================================
00:06:50.446        Range in us     Cumulative    IO count
00:06:50.447 [... per-bucket latency data: 5671.385us through 20769.871us, cumulative 0.0314% -> 100.0000% ...]
00:06:50.447 
00:06:50.447 01:26:58 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:06:51.822 Initializing NVMe Controllers
00:06:51.822 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:06:51.822 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:06:51.822 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:06:51.822 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:06:51.822 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:06:51.822 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:06:51.822 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:06:51.822 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:06:51.822 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:06:51.822 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:06:51.822 Initialization complete. Launching workers.
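The histograms in this log print a cumulative completion percentage per latency bucket; the percentile summaries that follow each run are read straight off that curve (the p-th percentile is the upper edge of the first bucket whose cumulative share reaches p). A minimal sketch of that lookup, with illustrative bucket values rather than ones copied from this run:

    # Sketch: how a latency percentile is read off a cumulative histogram.
    # Bucket values below are illustrative, not copied from this run.
    def percentile_us(buckets, target_pct):
        """buckets: ascending (upper_bound_us, cumulative_pct) pairs."""
        for upper_us, cum_pct in buckets:
            if cum_pct >= target_pct:
                return upper_us
        return buckets[-1][0]

    example = [(6099.889, 1.05), (6956.898, 50.3), (8418.855, 90.2),
               (11746.068, 99.0), (32062.228, 100.0)]
    print(percentile_us(example, 99.0))  # -> 11746.068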
00:06:51.822 ========================================================
00:06:51.822                                                                             Latency(us)
00:06:51.822 Device Information                     :       IOPS      MiB/s    Average        min        max
00:06:51.822 PCIE (0000:00:10.0) NSID 1 from core 0:   17566.06     205.85    7296.30    5532.62   31959.81
00:06:51.822 PCIE (0000:00:11.0) NSID 1 from core 0:   17566.06     205.85    7284.94    5781.36   30114.75
00:06:51.822 PCIE (0000:00:13.0) NSID 1 from core 0:   17566.06     205.85    7273.51    5611.21   28742.06
00:06:51.822 PCIE (0000:00:12.0) NSID 1 from core 0:   17566.06     205.85    7262.14    5664.62   26958.33
00:06:51.822 PCIE (0000:00:12.0) NSID 2 from core 0:   17566.06     205.85    7250.90    5689.19   25215.17
00:06:51.822 PCIE (0000:00:12.0) NSID 3 from core 0:   17629.94     206.60    7213.66    5761.75   19901.17
00:06:51.822 ========================================================
00:06:51.822 Total                                  :  105460.25    1235.86    7263.54    5532.62   31959.81
00:06:51.822 
00:06:51.822 Summary latency data, all devices (us; consolidated from the per-device summary blocks):
00:06:51.823 =========================================================================================
00:06:51.823 Percentile   10.0/NSID1  11.0/NSID1  13.0/NSID1  12.0/NSID1  12.0/NSID2  12.0/NSID3
00:06:51.823   1.00000%     6099.889    6099.889    6125.095    6150.302    6150.302    6150.302
00:06:51.823  10.00000%     6402.363    6503.188    6503.188    6503.188    6503.188    6503.188
00:06:51.823  25.00000%     6604.012    6654.425    6654.425    6654.425    6654.425    6654.425
00:06:51.823  50.00000%     6956.898    6906.486    6906.486    6906.486    6906.486    6906.486
00:06:51.823  75.00000%     7511.434    7461.022    7410.609    7410.609    7410.609    7511.434
00:06:51.823  90.00000%     8418.855    8418.855    8418.855    8418.855    8368.443    8368.443
00:06:51.823  95.00000%     8872.566    8822.154    8822.154    8771.742    8721.329    8771.742
00:06:51.823  98.00000%     9477.514    9326.277    9376.689    9326.277    9326.277    9326.277
00:06:51.823  99.00000%    11746.068   11846.892   11998.129   11998.129   11292.357   11191.532
00:06:51.823  99.50000%    25508.628   24399.557   23492.135   21677.292   19862.449   13712.148
00:06:51.823  99.90000%    31658.929   29844.086   28432.542   26617.698   24802.855   19559.975
00:06:51.823  99.99000%    32062.228   30247.385   28835.840   27020.997   25206.154   19963.274
00:06:51.823  99.99900%    32062.228   30247.385   28835.840   27020.997   25306.978   19963.274
00:06:51.823  99.99990%    32062.228   30247.385   28835.840   27020.997   25306.978   19963.274
00:06:51.823  99.99999%    32062.228   30247.385   28835.840   27020.997   25306.978   19963.274
00:06:51.823 
00:06:51.823 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:06:51.823 ==============================================================================
00:06:51.823        Range in us     Cumulative    IO count
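The MiB/s column is derivable from the IOPS column and the 12288-byte I/O size passed via -o; a quick sanity check against the first device row (an editorial check, not tool output):

    # Sanity check: MiB/s = IOPS * io_size_bytes / 2**20.
    # IOPS from the first device row above; -o 12288 from the perf command line.
    iops = 17566.06
    io_size_bytes = 12288
    print(round(iops * io_size_bytes / 2**20, 2))  # 205.85, matching the table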
00:06:51.824 [... per-bucket latency data: 5520.148us through 32062.228us, cumulative 0.0057% -> 100.0000% ...]
00:06:51.824 
00:06:51.824 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:06:51.824 ==============================================================================
00:06:51.824        Range in us     Cumulative    IO count
00:06:51.825 [... per-bucket latency data: 5772.209us through 30247.385us, cumulative 0.0170% -> 100.0000% ...]
00:06:51.825 
00:06:51.825 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:06:51.825 ==============================================================================
00:06:51.825        Range in us     Cumulative    IO count
00:06:51.826 [... per-bucket latency data: 5595.766us through 28835.840us, cumulative 0.0057% -> 100.0000% ...]
00:06:51.826 
00:06:51.826 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:06:51.826 ==============================================================================
00:06:51.826        Range in us     Cumulative    IO count
00:06:51.827 [... per-bucket latency data: 5646.178us through 27020.997us, cumulative 0.0057% -> 100.0000% ...]
00:06:51.827 
00:06:51.828 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:06:51.828 ==============================================================================
00:06:51.828        Range in us     Cumulative    IO count
00:06:51.828 [... per-bucket latency data: 5671.385us through 25306.978us, cumulative 0.0057% -> 100.0000% ...]
00:06:51.828 
00:06:51.828 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:06:51.829 ==============================================================================
00:06:51.829        Range in us     Cumulative    IO count
00:06:51.829 [... per-bucket latency data from 5747.003us; excerpt ends at 7612.258us, cumulative 77.0607% ...]
7612.258 - 7662.671: 77.8306% ( 136) 00:06:51.829 7662.671 - 7713.083: 78.8723% ( 184) 00:06:51.829 7713.083 - 7763.495: 79.6365% ( 135) 00:06:51.829 7763.495 - 7813.908: 80.6556% ( 180) 00:06:51.829 7813.908 - 7864.320: 81.4878% ( 147) 00:06:51.829 7864.320 - 7914.732: 82.3370% ( 150) 00:06:51.829 7914.732 - 7965.145: 83.2994% ( 170) 00:06:51.829 7965.145 - 8015.557: 84.2108% ( 161) 00:06:51.829 8015.557 - 8065.969: 85.1053% ( 158) 00:06:51.829 8065.969 - 8116.382: 85.9828% ( 155) 00:06:51.829 8116.382 - 8166.794: 86.8829% ( 159) 00:06:51.829 8166.794 - 8217.206: 87.7887% ( 160) 00:06:51.829 8217.206 - 8267.618: 88.5756% ( 139) 00:06:51.829 8267.618 - 8318.031: 89.6003% ( 181) 00:06:51.829 8318.031 - 8368.443: 90.3646% ( 135) 00:06:51.829 8368.443 - 8418.855: 91.0836% ( 127) 00:06:51.829 8418.855 - 8469.268: 91.9101% ( 146) 00:06:51.829 8469.268 - 8519.680: 92.6857% ( 137) 00:06:51.829 8519.680 - 8570.092: 93.1782% ( 87) 00:06:51.829 8570.092 - 8620.505: 93.6198% ( 78) 00:06:51.829 8620.505 - 8670.917: 94.0387% ( 74) 00:06:51.829 8670.917 - 8721.329: 94.6898% ( 115) 00:06:51.829 8721.329 - 8771.742: 95.1200% ( 76) 00:06:51.829 8771.742 - 8822.154: 95.4314% ( 55) 00:06:51.829 8822.154 - 8872.566: 95.8786% ( 79) 00:06:51.829 8872.566 - 8922.978: 96.1390% ( 46) 00:06:51.829 8922.978 - 8973.391: 96.5410% ( 71) 00:06:51.829 8973.391 - 9023.803: 96.9203% ( 67) 00:06:51.829 9023.803 - 9074.215: 97.2260% ( 54) 00:06:51.829 9074.215 - 9124.628: 97.4864% ( 46) 00:06:51.829 9124.628 - 9175.040: 97.7298% ( 43) 00:06:51.829 9175.040 - 9225.452: 97.8770% ( 26) 00:06:51.829 9225.452 - 9275.865: 97.9789% ( 18) 00:06:51.829 9275.865 - 9326.277: 98.0752% ( 17) 00:06:51.829 9326.277 - 9376.689: 98.1884% ( 20) 00:06:51.829 9376.689 - 9427.102: 98.2846% ( 17) 00:06:51.829 9427.102 - 9477.514: 98.3696% ( 15) 00:06:51.829 9477.514 - 9527.926: 98.4432% ( 13) 00:06:51.829 9527.926 - 9578.338: 98.4998% ( 10) 00:06:51.829 9578.338 - 9628.751: 98.5111% ( 2) 00:06:51.829 9628.751 - 9679.163: 98.5224% ( 2) 00:06:51.829 9679.163 - 9729.575: 98.5394% ( 3) 00:06:51.829 9729.575 - 9779.988: 98.5507% ( 2) 00:06:51.829 10233.698 - 10284.111: 98.5677% ( 3) 00:06:51.829 10284.111 - 10334.523: 98.5960% ( 5) 00:06:51.829 10334.523 - 10384.935: 98.6187% ( 4) 00:06:51.829 10384.935 - 10435.348: 98.6413% ( 4) 00:06:51.829 10435.348 - 10485.760: 98.6696% ( 5) 00:06:51.829 10485.760 - 10536.172: 98.6923% ( 4) 00:06:51.829 10536.172 - 10586.585: 98.7489% ( 10) 00:06:51.829 10586.585 - 10636.997: 98.7828% ( 6) 00:06:51.829 10636.997 - 10687.409: 98.7885% ( 1) 00:06:51.829 10687.409 - 10737.822: 98.7998% ( 2) 00:06:51.829 10737.822 - 10788.234: 98.8111% ( 2) 00:06:51.829 10788.234 - 10838.646: 98.8225% ( 2) 00:06:51.829 10838.646 - 10889.058: 98.8338% ( 2) 00:06:51.829 10889.058 - 10939.471: 98.8508% ( 3) 00:06:51.829 10939.471 - 10989.883: 98.8791% ( 5) 00:06:51.829 10989.883 - 11040.295: 98.9130% ( 6) 00:06:51.829 11040.295 - 11090.708: 98.9527% ( 7) 00:06:51.829 11090.708 - 11141.120: 98.9980% ( 8) 00:06:51.829 11141.120 - 11191.532: 99.0489% ( 9) 00:06:51.829 11191.532 - 11241.945: 99.1055% ( 10) 00:06:51.829 11241.945 - 11292.357: 99.1508% ( 8) 00:06:51.829 11292.357 - 11342.769: 99.2074% ( 10) 00:06:51.830 11342.769 - 11393.182: 99.2301% ( 4) 00:06:51.830 11393.182 - 11443.594: 99.2584% ( 5) 00:06:51.830 11443.594 - 11494.006: 99.2754% ( 3) 00:06:51.830 12754.314 - 12804.726: 99.2867% ( 2) 00:06:51.830 12804.726 - 12855.138: 99.2980% ( 2) 00:06:51.830 12855.138 - 12905.551: 99.3150% ( 3) 00:06:51.830 12905.551 - 13006.375: 
99.3376% ( 4) 00:06:51.830 13006.375 - 13107.200: 99.3603% ( 4) 00:06:51.830 13107.200 - 13208.025: 99.3829% ( 4) 00:06:51.830 13208.025 - 13308.849: 99.4056% ( 4) 00:06:51.830 13308.849 - 13409.674: 99.4339% ( 5) 00:06:51.830 13409.674 - 13510.498: 99.4565% ( 4) 00:06:51.830 13510.498 - 13611.323: 99.4792% ( 4) 00:06:51.830 13611.323 - 13712.148: 99.5018% ( 4) 00:06:51.830 13712.148 - 13812.972: 99.5245% ( 4) 00:06:51.830 13812.972 - 13913.797: 99.5471% ( 4) 00:06:51.830 13913.797 - 14014.622: 99.5754% ( 5) 00:06:51.830 14014.622 - 14115.446: 99.5981% ( 4) 00:06:51.830 14115.446 - 14216.271: 99.6207% ( 4) 00:06:51.830 14216.271 - 14317.095: 99.6377% ( 3) 00:06:51.830 18249.255 - 18350.080: 99.6433% ( 1) 00:06:51.830 18350.080 - 18450.905: 99.6716% ( 5) 00:06:51.830 18450.905 - 18551.729: 99.6943% ( 4) 00:06:51.830 18551.729 - 18652.554: 99.7169% ( 4) 00:06:51.830 18652.554 - 18753.378: 99.7396% ( 4) 00:06:51.830 18753.378 - 18854.203: 99.7622% ( 4) 00:06:51.830 18854.203 - 18955.028: 99.7905% ( 5) 00:06:51.830 18955.028 - 19055.852: 99.8132% ( 4) 00:06:51.830 19055.852 - 19156.677: 99.8302% ( 3) 00:06:51.830 19156.677 - 19257.502: 99.8528% ( 4) 00:06:51.830 19257.502 - 19358.326: 99.8755% ( 4) 00:06:51.830 19358.326 - 19459.151: 99.8981% ( 4) 00:06:51.830 19459.151 - 19559.975: 99.9207% ( 4) 00:06:51.830 19559.975 - 19660.800: 99.9434% ( 4) 00:06:51.830 19660.800 - 19761.625: 99.9660% ( 4) 00:06:51.830 19761.625 - 19862.449: 99.9887% ( 4) 00:06:51.830 19862.449 - 19963.274: 100.0000% ( 2) 00:06:51.830 00:06:51.830 01:26:59 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:06:51.830 00:06:51.830 real 0m2.544s 00:06:51.830 user 0m2.234s 00:06:51.830 sys 0m0.193s 00:06:51.830 01:26:59 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.830 01:26:59 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.830 ************************************ 00:06:51.830 END TEST nvme_perf 00:06:51.830 ************************************ 00:06:51.830 01:26:59 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:06:51.830 01:26:59 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:51.830 01:26:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.830 01:26:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.830 ************************************ 00:06:51.830 START TEST nvme_hello_world 00:06:51.830 ************************************ 00:06:51.830 01:27:00 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:06:51.830 Initializing NVMe Controllers 00:06:51.830 Attached to 0000:00:10.0 00:06:51.830 Namespace ID: 1 size: 6GB 00:06:51.830 Attached to 0000:00:11.0 00:06:51.830 Namespace ID: 1 size: 5GB 00:06:51.830 Attached to 0000:00:13.0 00:06:51.830 Namespace ID: 1 size: 1GB 00:06:51.830 Attached to 0000:00:12.0 00:06:51.830 Namespace ID: 1 size: 4GB 00:06:51.830 Namespace ID: 2 size: 4GB 00:06:51.830 Namespace ID: 3 size: 4GB 00:06:51.830 Initialization complete. 00:06:51.830 INFO: using host memory buffer for IO 00:06:51.830 Hello world! 00:06:51.830 INFO: using host memory buffer for IO 00:06:51.830 Hello world! 00:06:51.830 INFO: using host memory buffer for IO 00:06:51.830 Hello world! 00:06:51.830 INFO: using host memory buffer for IO 00:06:51.830 Hello world! 00:06:51.830 INFO: using host memory buffer for IO 00:06:51.830 Hello world! 
00:06:51.830 INFO: using host memory buffer for IO 00:06:51.830 Hello world! 00:06:51.830 00:06:51.830 real 0m0.225s 00:06:51.830 user 0m0.077s 00:06:51.830 sys 0m0.105s 00:06:51.830 01:27:00 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.830 01:27:00 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:51.830 ************************************ 00:06:51.830 END TEST nvme_hello_world 00:06:51.830 ************************************ 00:06:51.830 01:27:00 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:06:51.830 01:27:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.830 01:27:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.830 01:27:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.830 ************************************ 00:06:51.830 START TEST nvme_sgl 00:06:51.830 ************************************ 00:06:51.830 01:27:00 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:06:52.088 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:06:52.088 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:06:52.088 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:06:52.088 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:06:52.088 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:06:52.088 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:06:52.088 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:06:52.089 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:06:52.089 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:06:52.347 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:06:52.347 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:06:52.347 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:06:52.347 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:06:52.347 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:06:52.347 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:06:52.347 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:06:52.347 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:06:52.347 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:06:52.347 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:06:52.347 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:06:52.347 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:06:52.347 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
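The hello_world pass above amounts to scanning the local PCIe bus, attaching each controller, and reporting every active namespace and its size. A minimal sketch of that flow against SPDK's public API follows (headers spdk/nvme.h and spdk/env.h; the program name and the GB divisor are illustrative choices, not the example's actual code, and error handling is trimmed):

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* attach to every controller the probe discovers */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	uint32_t nsid;

	printf("Attached to %s\n", trid->traddr);
	/* walk the controller's active namespace list, as the log lines do */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		printf("Namespace ID: %u size: %juGB\n", nsid,
		       (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000ULL));
	}
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "hello_sketch"; /* illustrative name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* a NULL transport ID scans the local PCIe bus, which is how the
	 * four controllers above are found */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}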
00:06:51.830 01:27:00 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:06:51.830 01:27:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:51.830 01:27:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.830 01:27:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:51.830 ************************************
00:06:51.830 START TEST nvme_sgl
00:06:51.830 ************************************
00:06:51.830 01:27:00 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:06:52.088 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:06:52.088 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:06:52.088 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:06:52.088 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:06:52.088 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:06:52.088 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:06:52.088 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:06:52.089 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:06:52.089 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:06:52.347 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:06:52.347 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:06:52.347 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:06:52.347 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:06:52.347 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:06:52.347 NVMe Readv/Writev Request test
00:06:52.347 Attached to 0000:00:10.0
00:06:52.347 Attached to 0000:00:11.0
00:06:52.347 Attached to 0000:00:13.0
00:06:52.347 Attached to 0000:00:12.0
00:06:52.347 0000:00:10.0: build_io_request_2 test passed
00:06:52.347 0000:00:10.0: build_io_request_4 test passed
00:06:52.347 0000:00:10.0: build_io_request_5 test passed
00:06:52.347 0000:00:10.0: build_io_request_6 test passed
00:06:52.347 0000:00:10.0: build_io_request_7 test passed
00:06:52.347 0000:00:10.0: build_io_request_10 test passed
00:06:52.347 0000:00:11.0: build_io_request_2 test passed
00:06:52.347 0000:00:11.0: build_io_request_4 test passed
00:06:52.347 0000:00:11.0: build_io_request_5 test passed
00:06:52.347 0000:00:11.0: build_io_request_6 test passed
00:06:52.347 0000:00:11.0: build_io_request_7 test passed
00:06:52.347 0000:00:11.0: build_io_request_10 test passed
00:06:52.347 Cleaning up...
00:06:52.347 
00:06:52.347 real	0m0.319s
00:06:52.347 user	0m0.150s
00:06:52.347 sys	0m0.121s
00:06:52.347 01:27:00 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.347 01:27:00 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:06:52.347 ************************************
00:06:52.347 END TEST nvme_sgl
00:06:52.347 ************************************
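The build_io_request_N cases above stress SPDK's scattered-payload path: instead of one flat buffer, the driver pulls the payload element by element through a pair of callbacks. A sketch of issuing one read over a two-element scatter list (the context struct, buffer layout, and function names other than the spdk_nvme_* calls are illustrative assumptions, not the test's code):

#include <stdint.h>
#include "spdk/nvme.h"

struct sgl_ctx {
	void     *base;   /* one DMA-safe allocation split into two SGEs */
	uint32_t  half;   /* size of each element in bytes */
	uint32_t  offset; /* cursor maintained across next_sge calls */
};

static void
reset_sgl(void *cb_arg, uint32_t offset)
{
	((struct sgl_ctx *)cb_arg)->offset = offset;
}

static int
next_sge(void *cb_arg, void **address, uint32_t *length)
{
	struct sgl_ctx *s = cb_arg;

	/* hand back one element per call; the driver keeps calling until
	 * the whole payload length is covered */
	*address = (uint8_t *)s->base + s->offset;
	*length = s->half;
	s->offset += s->half;
	return 0;
}

static void
io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* spdk_nvme_cpl_is_error(cpl) is what separates the "test passed"
	 * lines from the "Invalid IO length parameter" failures above */
}

int
submit_sgl_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                struct sgl_ctx *s)
{
	uint32_t lba_count = (2 * s->half) / spdk_nvme_ns_get_sector_size(ns);

	return spdk_nvme_ns_cmd_readv(ns, qpair, 0 /* lba */, lba_count,
	                              io_done, s, 0 /* io_flags */,
	                              reset_sgl, next_sge);
}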
00:06:52.347 01:27:00 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:06:52.347 01:27:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:52.347 01:27:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.347 01:27:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:52.347 ************************************
00:06:52.347 START TEST nvme_e2edp
00:06:52.347 ************************************
00:06:52.347 01:27:00 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:06:52.605 NVMe Write/Read with End-to-End data protection test
00:06:52.605 Attached to 0000:00:10.0
00:06:52.605 Attached to 0000:00:11.0
00:06:52.605 Attached to 0000:00:13.0
00:06:52.605 Attached to 0000:00:12.0
00:06:52.605 Cleaning up...
00:06:52.605 
00:06:52.605 real	0m0.209s
00:06:52.605 user	0m0.078s
00:06:52.605 sys	0m0.087s
00:06:52.605 01:27:00 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.605 01:27:00 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:06:52.605 ************************************
00:06:52.605 END TEST nvme_e2edp
00:06:52.605 ************************************
00:06:52.605 01:27:00 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:06:52.605 01:27:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:52.605 01:27:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.605 01:27:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:52.605 ************************************
00:06:52.605 START TEST nvme_reserve
00:06:52.605 ************************************
00:06:52.605 01:27:00 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:06:52.863 =====================================================
00:06:52.863 NVMe Controller at PCI bus 0, device 16, function 0
00:06:52.863 =====================================================
00:06:52.863 Reservations:                Not Supported
00:06:52.863 =====================================================
00:06:52.863 NVMe Controller at PCI bus 0, device 17, function 0
00:06:52.863 =====================================================
00:06:52.863 Reservations:                Not Supported
00:06:52.863 =====================================================
00:06:52.863 NVMe Controller at PCI bus 0, device 19, function 0
00:06:52.863 =====================================================
00:06:52.863 Reservations:                Not Supported
00:06:52.863 =====================================================
00:06:52.863 NVMe Controller at PCI bus 0, device 18, function 0
00:06:52.863 =====================================================
00:06:52.863 Reservations:                Not Supported
00:06:52.863 Reservation test passed
00:06:52.863 
00:06:52.863 real	0m0.225s
00:06:52.863 user	0m0.060s
00:06:52.863 sys	0m0.116s
00:06:52.863 01:27:01 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.863 01:27:01 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:06:52.863 ************************************
00:06:52.863 END TEST nvme_reserve
00:06:52.863 ************************************
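The "Reservations: Not Supported" lines come from interrogating each controller's capabilities before attempting any reservation command: QEMU's emulated NVMe does not advertise the reservations bit in ONCS, so the test reports support and passes without issuing register/acquire/release. A sketch of that capability check (field names follow spdk/nvme_spec.h as best understood here; treat the bit-field name as an assumption):

#include <stdio.h>
#include "spdk/nvme.h"

void
report_reservation_support(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Identify Controller data is cached by the driver at attach time */
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	if (cdata->oncs.reservations) {
		printf("Reservations:                Supported\n");
	} else {
		/* the QEMU controllers above take this branch */
		printf("Reservations:                Not Supported\n");
	}
}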
00:06:52.863 01:27:01 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:06:52.863 01:27:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:52.863 01:27:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.863 01:27:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:52.863 ************************************
00:06:52.863 START TEST nvme_err_injection
00:06:52.863 ************************************
00:06:52.863 01:27:01 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:06:53.123 NVMe Error Injection test
00:06:53.123 Attached to 0000:00:10.0
00:06:53.123 Attached to 0000:00:11.0
00:06:53.123 Attached to 0000:00:13.0
00:06:53.123 Attached to 0000:00:12.0
00:06:53.123 0000:00:10.0: get features failed as expected
00:06:53.123 0000:00:11.0: get features failed as expected
00:06:53.123 0000:00:13.0: get features failed as expected
00:06:53.123 0000:00:12.0: get features failed as expected
00:06:53.124 0000:00:10.0: get features successfully as expected
00:06:53.124 0000:00:11.0: get features successfully as expected
00:06:53.124 0000:00:13.0: get features successfully as expected
00:06:53.124 0000:00:12.0: get features successfully as expected
00:06:53.124 0000:00:10.0: read failed as expected
00:06:53.124 0000:00:11.0: read failed as expected
00:06:53.124 0000:00:13.0: read failed as expected
00:06:53.124 0000:00:12.0: read failed as expected
00:06:53.124 0000:00:10.0: read successfully as expected
00:06:53.124 0000:00:11.0: read successfully as expected
00:06:53.124 0000:00:13.0: read successfully as expected
00:06:53.124 0000:00:12.0: read successfully as expected
00:06:53.124 Cleaning up...
00:06:53.124 
00:06:53.124 real	0m0.220s
00:06:53.124 user	0m0.080s
00:06:53.124 sys	0m0.098s
00:06:53.124 01:27:01 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.124 01:27:01 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:06:53.124 ************************************
00:06:53.124 END TEST nvme_err_injection
00:06:53.124 ************************************
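The "failed as expected" / "successfully as expected" pairs come from arming and then disarming software error injection in the driver: the injected command never reaches the device, it is completed locally with the chosen status. A minimal use of the public helper (names from spdk/nvme.h; the assumption that a NULL qpair targets the admin queue, and the particular status codes, are illustrative):

#include "spdk/nvme.h"

int
arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	/* fail the next Get Features on the admin queue exactly once,
	 * completing it with a generic Invalid Field status */
	return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
			SPDK_NVME_OPC_GET_FEATURES,
			true /* do_not_submit */,
			0    /* timeout_in_us */,
			1    /* err_count */,
			SPDK_NVME_SCT_GENERIC,
			SPDK_NVME_SC_INVALID_FIELD);
}

void
disarm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	/* after this, the same Get Features succeeds, which is the
	 * "successfully as expected" half of the log above */
	spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
			SPDK_NVME_OPC_GET_FEATURES);
}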
00:06:53.124 01:27:01 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:06:53.124 01:27:01 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:06:53.124 01:27:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.124 01:27:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:53.124 ************************************
00:06:53.124 START TEST nvme_overhead
00:06:53.124 ************************************
00:06:53.124 01:27:01 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:06:54.536 Initializing NVMe Controllers
00:06:54.536 Attached to 0000:00:10.0
00:06:54.536 Attached to 0000:00:11.0
00:06:54.536 Attached to 0000:00:13.0
00:06:54.536 Attached to 0000:00:12.0
00:06:54.536 Initialization complete. Launching workers.
00:06:54.536 submit (in ns)   avg, min, max =  11833.8,  10551.5,  80336.2
00:06:54.536 complete (in ns) avg, min, max =   8246.9,   7320.0, 313796.9
00:06:54.536 
00:06:54.536 Submit histogram
00:06:54.536 ================
00:06:54.536        Range in us     Cumulative     Count
00:06:54.537 [submit histogram buckets elided for readability: most submissions fall between roughly 11.4 us and 12.0 us; the cumulative count reaches 100.0000% at 80.345 us]
00:06:54.537 
00:06:54.537 Complete histogram
00:06:54.537 ==================
00:06:54.537        Range in us     Cumulative     Count
00:06:54.538 [complete histogram buckets elided for readability: most completions fall between roughly 7.9 us and 8.5 us; the cumulative count reaches 100.0000% at 315.077 us]
00:06:54.538 
00:06:54.538 ************************************
00:06:54.538 END TEST nvme_overhead
00:06:54.538 ************************************
00:06:54.538 
00:06:54.538 real	0m1.212s
00:06:54.538 user	0m1.066s
00:06:54.538 sys	0m0.097s
00:06:54.538 01:27:02 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:54.538 01:27:02 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
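The submit and complete averages above are per-call software overheads, measured by bracketing the submission and completion paths with timestamps. The same measurement can be approximated with the env layer's TSC helpers; a sketch (not the overhead tool's actual code, and the function is an invented helper):

#include <stdint.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

uint64_t
timed_submit_ns(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                void *buf, spdk_nvme_cmd_cb cb, void *cb_arg)
{
	uint64_t start = spdk_get_ticks();

	/* only the submission path is timed here, matching the
	 * "submit (in ns)" column; completion cost is measured around
	 * spdk_nvme_qpair_process_completions() separately */
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1 /* lba count */,
	                      cb, cb_arg, 0 /* io_flags */);

	uint64_t end = spdk_get_ticks();

	/* convert the TSC delta to nanoseconds */
	return (end - start) * 1000000000ULL / spdk_get_ticks_hz();
}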
00:06:54.538 01:27:02 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:06:54.538 01:27:02 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:06:54.538 01:27:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:54.538 01:27:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:54.538 ************************************
00:06:54.538 START TEST nvme_arbitration
00:06:54.538 ************************************
00:06:54.538 01:27:02 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:06:57.821 Initializing NVMe Controllers
00:06:57.821 Attached to 0000:00:10.0
00:06:57.821 Attached to 0000:00:11.0
00:06:57.821 Attached to 0000:00:13.0
00:06:57.821 Attached to 0000:00:12.0
00:06:57.821 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:06:57.821 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:06:57.821 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:06:57.821 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:06:57.821 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:06:57.821 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:06:57.821 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:06:57.821 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:06:57.821 Initialization complete. Launching workers.
00:06:57.821 Starting thread on core 1 with urgent priority queue
00:06:57.821 Starting thread on core 2 with urgent priority queue
00:06:57.821 Starting thread on core 3 with urgent priority queue
00:06:57.821 Starting thread on core 0 with urgent priority queue
00:06:57.821 QEMU NVMe Ctrl (12340 ) core 0:  960.00 IO/s   104.17 secs/100000 ios
00:06:57.821 QEMU NVMe Ctrl (12342 ) core 0:  960.00 IO/s   104.17 secs/100000 ios
00:06:57.821 QEMU NVMe Ctrl (12341 ) core 1:  981.33 IO/s   101.90 secs/100000 ios
00:06:57.821 QEMU NVMe Ctrl (12342 ) core 1:  981.33 IO/s   101.90 secs/100000 ios
00:06:57.821 QEMU NVMe Ctrl (12343 ) core 2:  981.33 IO/s   101.90 secs/100000 ios
00:06:57.821 QEMU NVMe Ctrl (12342 ) core 3:  917.33 IO/s   109.01 secs/100000 ios
00:06:57.821 ========================================================
00:06:57.821 
00:06:57.821 real	0m3.330s
00:06:57.821 user	0m9.300s
00:06:57.821 sys	0m0.124s
00:06:57.821 ************************************
00:06:57.821 END TEST nvme_arbitration
00:06:57.821 ************************************
00:06:57.821 01:27:05 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:57.821 01:27:05 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
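Each "urgent priority queue" above is an I/O qpair created with an explicit arbitration class. A sketch of how such a qpair is built with the public API (the helper function name is illustrative; the priority is only honored when the controller was initialized with weighted-round-robin arbitration, otherwise it is ignored):

#include "spdk/nvme.h"

struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	/* start from the controller's defaults, then raise the class */
	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = SPDK_NVME_QPRIO_URGENT;

	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}

With -M 50 the workload is half reads, half writes, and the per-core IO/s figures above show how the urgent-class queues on cores 0-2 keep slightly ahead of core 3.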
00:06:57.821 01:27:06 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:06:57.821 01:27:06 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:57.821 01:27:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.821 01:27:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:57.821 ************************************
00:06:57.821 START TEST nvme_single_aen
00:06:57.821 ************************************
00:06:57.821 01:27:06 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:06:57.821 Asynchronous Event Request test
00:06:57.821 Attached to 0000:00:10.0
00:06:57.821 Attached to 0000:00:11.0
00:06:57.821 Attached to 0000:00:13.0
00:06:57.821 Attached to 0000:00:12.0
00:06:57.821 Reset controller to setup AER completions for this process
00:06:57.821 Registering asynchronous event callbacks...
00:06:57.821 Getting orig temperature thresholds of all controllers
00:06:57.821 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:06:57.821 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:06:57.821 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:06:57.821 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:06:57.821 Setting all controllers temperature threshold low to trigger AER
00:06:57.821 Waiting for all controllers temperature threshold to be set lower
00:06:57.821 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:06:57.821 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:06:57.821 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:06:57.821 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:06:57.821 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:06:57.821 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:06:57.821 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:06:57.821 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:06:57.821 Waiting for all controllers to trigger AER and reset threshold
00:06:57.821 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:06:57.821 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:06:57.821 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:06:57.821 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:06:57.821 Cleaning up...
00:06:57.821 ************************************
00:06:57.821 END TEST nvme_single_aen
00:06:57.821 ************************************
00:06:57.821 
00:06:57.821 real	0m0.222s
00:06:57.821 user	0m0.076s
00:06:57.821 sys	0m0.103s
00:06:57.821 01:27:06 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:57.821 01:27:06 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
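The AER flow above is: register a callback, lower the temperature threshold well below the device's reported temperature so the controller raises an asynchronous event, then poll the admin queue until the callback fires and restores the original threshold. A sketch with the public API (names from spdk/nvme.h; the cdw11 packing, with the Kelvin threshold in bits 15:0, is a simplified reading of the Temperature Threshold feature):

#include <stdbool.h>
#include "spdk/nvme.h"

static volatile bool g_aer_seen;

static void
aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* cpl->cdw0 carries the async event type/info that the
	 * "aer_cb for log page 2" lines above decode; a real handler
	 * also restores the original threshold here */
	g_aer_seen = true;
}

static void
set_feature_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* threshold write acknowledged */
}

void
trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* 200 K is far below the 323 K the devices report, so the
	 * controller signals the event almost immediately */
	spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
	                                200 /* cdw11: threshold in Kelvin */, 0,
	                                NULL, 0, set_feature_done, NULL);

	/* AERs complete on the admin queue, so it must be polled */
	while (!g_aer_seen) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}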
00:06:58.080 01:27:06 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:06:58.080 01:27:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:58.080 01:27:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.080 01:27:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:58.080 ************************************
00:06:58.080 START TEST nvme_doorbell_aers
00:06:58.080 ************************************
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:06:58.080 01:27:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:06:58.080 [2024-11-17 01:27:06.530605] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:08.058 Executing: test_write_invalid_db
00:07:08.058 Waiting for AER completion...
00:07:08.058 Failure: test_write_invalid_db
00:07:08.058 
00:07:08.058 Executing: test_invalid_db_write_overflow_sq
00:07:08.058 Waiting for AER completion...
00:07:08.058 Failure: test_invalid_db_write_overflow_sq
00:07:08.058 
00:07:08.058 Executing: test_invalid_db_write_overflow_cq
00:07:08.058 Waiting for AER completion...
00:07:08.058 Failure: test_invalid_db_write_overflow_cq
00:07:08.058 
00:07:08.058 01:27:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:08.058 01:27:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:07:08.316 [2024-11-17 01:27:16.586575] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:18.288 Executing: test_write_invalid_db
00:07:18.288 Waiting for AER completion...
00:07:18.288 Failure: test_write_invalid_db
00:07:18.288 
00:07:18.288 Executing: test_invalid_db_write_overflow_sq
00:07:18.288 Waiting for AER completion...
00:07:18.288 Failure: test_invalid_db_write_overflow_sq
00:07:18.288 
00:07:18.288 Executing: test_invalid_db_write_overflow_cq
00:07:18.288 Waiting for AER completion...
00:07:18.288 Failure: test_invalid_db_write_overflow_cq
00:07:18.288 
00:07:18.288 01:27:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:18.288 01:27:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:07:18.288 [2024-11-17 01:27:26.596977] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:28.261 Executing: test_write_invalid_db
00:07:28.261 Waiting for AER completion...
00:07:28.261 Failure: test_write_invalid_db
00:07:28.261 
00:07:28.261 Executing: test_invalid_db_write_overflow_sq
00:07:28.261 Waiting for AER completion...
00:07:28.261 Failure: test_invalid_db_write_overflow_sq
00:07:28.261 
00:07:28.261 Executing: test_invalid_db_write_overflow_cq
00:07:28.261 Waiting for AER completion...
00:07:28.261 Failure: test_invalid_db_write_overflow_cq
00:07:28.261 
00:07:28.261 01:27:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:28.261 01:27:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:07:28.261 [2024-11-17 01:27:36.639546] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 Executing: test_write_invalid_db
00:07:38.261 Waiting for AER completion...
00:07:38.261 Failure: test_write_invalid_db
00:07:38.261 
00:07:38.261 Executing: test_invalid_db_write_overflow_sq
00:07:38.261 Waiting for AER completion...
00:07:38.261 Failure: test_invalid_db_write_overflow_sq
00:07:38.261 
00:07:38.261 Executing: test_invalid_db_write_overflow_cq
00:07:38.261 Waiting for AER completion...
00:07:38.261 Failure: test_invalid_db_write_overflow_cq
00:07:38.261 
00:07:38.261 
00:07:38.261 real	0m40.180s
00:07:38.261 user	0m34.193s
00:07:38.261 sys	0m5.612s
00:07:38.261 01:27:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.261 01:27:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:07:38.261 ************************************
00:07:38.261 END TEST nvme_doorbell_aers
00:07:38.261 ************************************
00:07:38.261 01:27:46 nvme -- nvme/nvme.sh@97 -- # uname
00:07:38.261 01:27:46 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:07:38.261 01:27:46 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:07:38.261 01:27:46 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:38.261 01:27:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.261 01:27:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:38.261 ************************************
00:07:38.261 START TEST nvme_multi_aen
00:07:38.261 ************************************
00:07:38.261 01:27:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:07:38.261 [2024-11-17 01:27:46.689140] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.689200] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.689209] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.690456] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.690485] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.690492] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.691585] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.691704] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.691762] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.692739] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.692845] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.261 [2024-11-17 01:27:46.692895] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63192) is not found. Dropping the request.
00:07:38.520 Child process pid: 63718
00:07:38.520 [Child] Asynchronous Event Request test
00:07:38.520 [Child] Attached to 0000:00:10.0
00:07:38.520 [Child] Attached to 0000:00:11.0
00:07:38.520 [Child] Attached to 0000:00:13.0
00:07:38.520 [Child] Attached to 0000:00:12.0
00:07:38.520 [Child] Registering asynchronous event callbacks...
00:07:38.520 [Child] Getting orig temperature thresholds of all controllers
00:07:38.520 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:38.520 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:38.520 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:38.520 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:38.520 [Child] Waiting for all controllers to trigger AER and reset threshold
00:07:38.520 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:38.520 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:38.520 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:38.520 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:38.520 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:38.520 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:38.520 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:38.520 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:38.520 [Child] Cleaning up...
00:07:38.520 Asynchronous Event Request test
00:07:38.520 Attached to 0000:00:10.0
00:07:38.520 Attached to 0000:00:11.0
00:07:38.520 Attached to 0000:00:13.0
00:07:38.520 Attached to 0000:00:12.0
00:07:38.520 Reset controller to setup AER completions for this process
00:07:38.520 Registering asynchronous event callbacks...
00:07:38.520 Getting orig temperature thresholds of all controllers
00:07:38.520 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:38.520 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:38.520 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:38.520 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:38.520 Setting all controllers temperature threshold low to trigger AER
00:07:38.520 Waiting for all controllers temperature threshold to be set lower
00:07:38.520 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:38.520 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:38.520 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:38.520 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:38.520 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:38.520 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:07:38.520 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:38.520 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:07:38.520 Waiting for all controllers to trigger AER and reset threshold
00:07:38.520 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:38.520 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:38.520 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:38.520 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:38.520 Cleaning up...
00:07:38.520 
00:07:38.520 real	0m0.450s
00:07:38.520 user	0m0.156s
00:07:38.520 sys	0m0.193s
00:07:38.520 01:27:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.520 01:27:46 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:07:38.778 ************************************
00:07:38.778 END TEST nvme_multi_aen
00:07:38.778 ************************************
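The parent/child split above relies on SPDK's multi-process mode: every tool in this run passes "-i 0", which maps to the env layer's shared-memory id, so a primary and a secondary process attach to the same hugepage-backed controller state (the stale-pid "Dropping the request" errors are leftovers from an earlier owner of that shared state). A sketch of the env setup involved (the helper name is invented; whether a process comes up primary or secondary is decided by the env layer, not by a flag here):

#include "spdk/env.h"

int
init_shared_env(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "aer_sketch"; /* illustrative name */
	/* matches the tests' "-i 0": processes that pass the same shm_id
	 * share controller state across the fork in the log above */
	opts.shm_id = 0;

	return spdk_env_init(&opts);
}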
00:07:38.778 01:27:46 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:07:38.778 01:27:46 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:38.778 01:27:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.778 01:27:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:38.778 ************************************
00:07:38.778 START TEST nvme_startup
00:07:38.778 ************************************
00:07:38.778 01:27:47 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:07:38.778 Initializing NVMe Controllers
00:07:38.778 Attached to 0000:00:10.0
00:07:38.778 Attached to 0000:00:11.0
00:07:38.779 Attached to 0000:00:13.0
00:07:38.779 Attached to 0000:00:12.0
00:07:38.779 Initialization complete.
00:07:38.779 Time used:164617.516 (us).
00:07:38.779
00:07:38.779 real 0m0.231s
00:07:38.779 user 0m0.070s
00:07:38.779 sys 0m0.103s
00:07:38.779 ************************************
00:07:38.779 END TEST nvme_startup
00:07:38.779 ************************************
00:07:38.779 01:27:47 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.779 01:27:47 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:07:39.037 01:27:47 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:07:39.037 01:27:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:39.037 01:27:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:39.037 01:27:47 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:39.037 ************************************
00:07:39.037 START TEST nvme_multi_secondary
00:07:39.037 ************************************
00:07:39.037 01:27:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:07:39.037 01:27:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63768
00:07:39.037 01:27:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63769
00:07:39.037 01:27:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:07:39.037 01:27:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:07:39.037 01:27:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:07:42.356 Initializing NVMe Controllers
00:07:42.356 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:42.356 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:42.356 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:42.356 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:42.356 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:07:42.356 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:07:42.356 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:07:42.356 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:07:42.356 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:07:42.356 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:07:42.356 Initialization complete. Launching workers.
00:07:42.356 ========================================================
00:07:42.356 Latency(us)
00:07:42.356 Device Information : IOPS MiB/s Average min max
00:07:42.356 PCIE (0000:00:10.0) NSID 1 from core 1: 7454.43 29.12 2145.01 851.17 6942.86
00:07:42.356 PCIE (0000:00:11.0) NSID 1 from core 1: 7454.43 29.12 2145.96 1014.76 6637.83
00:07:42.356 PCIE (0000:00:13.0) NSID 1 from core 1: 7454.43 29.12 2145.93 926.63 6559.79
00:07:42.356 PCIE (0000:00:12.0) NSID 1 from core 1: 7454.43 29.12 2145.92 990.92 6253.30
00:07:42.356 PCIE (0000:00:12.0) NSID 2 from core 1: 7454.43 29.12 2145.90 995.11 6349.47
00:07:42.356 PCIE (0000:00:12.0) NSID 3 from core 1: 7454.43 29.12 2145.87 1003.97 6243.02
00:07:42.356 ========================================================
00:07:42.356 Total : 44726.58 174.71 2145.76 851.17 6942.86
00:07:42.356
00:07:42.356 Initializing NVMe Controllers
00:07:42.356 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:42.356 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:42.356 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:42.356 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:42.356 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:07:42.356 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:07:42.356 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:07:42.356 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:07:42.356 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:07:42.356 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:07:42.356 Initialization complete. Launching workers.
00:07:42.356 ========================================================
00:07:42.356 Latency(us)
00:07:42.356 Device Information : IOPS MiB/s Average min max
00:07:42.356 PCIE (0000:00:10.0) NSID 1 from core 2: 3157.13 12.33 5066.20 1321.92 19033.10
00:07:42.356 PCIE (0000:00:11.0) NSID 1 from core 2: 3157.13 12.33 5067.34 1286.38 18426.04
00:07:42.356 PCIE (0000:00:13.0) NSID 1 from core 2: 3157.13 12.33 5067.36 1278.46 14928.41
00:07:42.356 PCIE (0000:00:12.0) NSID 1 from core 2: 3157.13 12.33 5066.83 1158.91 18729.50
00:07:42.356 PCIE (0000:00:12.0) NSID 2 from core 2: 3157.13 12.33 5066.76 1095.76 17437.20
00:07:42.356 PCIE (0000:00:12.0) NSID 3 from core 2: 3157.13 12.33 5067.03 959.25 18712.55
00:07:42.356 ========================================================
00:07:42.356 Total : 18942.79 74.00 5066.92 959.25 19033.10
00:07:42.356
00:07:42.356 01:27:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63768
00:07:44.271 Initializing NVMe Controllers
00:07:44.271 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:44.271 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:44.271 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:44.271 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:44.271 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:44.271 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:44.271 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:44.271 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:44.271 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:44.271 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:44.271 Initialization complete. Launching workers.
00:07:44.271 ========================================================
00:07:44.271 Latency(us)
00:07:44.271 Device Information : IOPS MiB/s Average min max
00:07:44.271 PCIE (0000:00:10.0) NSID 1 from core 0: 10613.19 41.46 1506.32 659.52 6403.03
00:07:44.271 PCIE (0000:00:11.0) NSID 1 from core 0: 10613.19 41.46 1507.16 670.33 5625.59
00:07:44.271 PCIE (0000:00:13.0) NSID 1 from core 0: 10613.19 41.46 1507.12 675.86 5219.26
00:07:44.271 PCIE (0000:00:12.0) NSID 1 from core 0: 10613.19 41.46 1507.10 676.26 5402.62
00:07:44.271 PCIE (0000:00:12.0) NSID 2 from core 0: 10613.19 41.46 1507.08 625.28 5938.85
00:07:44.271 PCIE (0000:00:12.0) NSID 3 from core 0: 10613.19 41.46 1507.06 589.14 5916.34
00:07:44.271 ========================================================
00:07:44.271 Total : 63679.14 248.75 1506.98 589.14 6403.03
00:07:44.271
00:07:44.271 01:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63769
00:07:44.271 01:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63838
00:07:44.271 01:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:07:44.271 01:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63839
00:07:44.271 01:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:07:44.271 01:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:07:47.569 Initializing NVMe Controllers
00:07:47.569 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:47.569 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:47.569 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:47.569 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:47.569 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:47.569 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:47.569 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:47.569 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:47.569 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:47.569 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:47.569 Initialization complete. Launching workers.
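The interesting detail in the spdk_nvme_perf invocations traced above is that all of them pass -i 0, the shared-memory group id: the first process up becomes the DPDK primary and owns the hugepage state, and the -c 0x2 / -c 0x4 instances attach to the same group as secondary processes on disjoint core masks. A minimal sketch of the same arrangement (paths as in this job; the fixed settle delay is an assumption — how the real nvme.sh synchronizes the processes is not visible in this log):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary on core 0, runs longest
  pid0=$!
  sleep 1                                           # assumed settle time before secondaries attach
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary on core 1
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary on core 2
  wait $pid0                                        # each process prints its own latency table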
00:07:47.569 ========================================================
00:07:47.569 Latency(us)
00:07:47.569 Device Information : IOPS MiB/s Average min max
00:07:47.569 PCIE (0000:00:10.0) NSID 1 from core 0: 7737.18 30.22 2066.62 697.40 5826.70
00:07:47.569 PCIE (0000:00:11.0) NSID 1 from core 0: 7737.18 30.22 2067.77 720.32 5889.36
00:07:47.569 PCIE (0000:00:13.0) NSID 1 from core 0: 7737.18 30.22 2067.79 721.99 5655.27
00:07:47.569 PCIE (0000:00:12.0) NSID 1 from core 0: 7737.18 30.22 2067.74 715.06 5667.05
00:07:47.569 PCIE (0000:00:12.0) NSID 2 from core 0: 7737.18 30.22 2067.71 721.47 5344.00
00:07:47.569 PCIE (0000:00:12.0) NSID 3 from core 0: 7737.18 30.22 2067.78 723.24 6037.98
00:07:47.569 ========================================================
00:07:47.569 Total : 46423.09 181.34 2067.57 697.40 6037.98
00:07:47.569
00:07:47.569 Initializing NVMe Controllers
00:07:47.569 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:47.569 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:47.569 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:47.569 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:47.569 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:07:47.569 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:07:47.569 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:07:47.569 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:07:47.569 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:07:47.569 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:07:47.569 Initialization complete. Launching workers.
00:07:47.569 ========================================================
00:07:47.569 Latency(us)
00:07:47.569 Device Information : IOPS MiB/s Average min max
00:07:47.569 PCIE (0000:00:10.0) NSID 1 from core 1: 7528.32 29.41 2123.96 707.85 7645.07
00:07:47.569 PCIE (0000:00:11.0) NSID 1 from core 1: 7528.32 29.41 2124.94 727.62 7615.80
00:07:47.569 PCIE (0000:00:13.0) NSID 1 from core 1: 7528.32 29.41 2124.99 718.54 7683.59
00:07:47.569 PCIE (0000:00:12.0) NSID 1 from core 1: 7528.32 29.41 2124.98 716.16 7725.69
00:07:47.569 PCIE (0000:00:12.0) NSID 2 from core 1: 7528.32 29.41 2124.97 718.34 7852.38
00:07:47.569 PCIE (0000:00:12.0) NSID 3 from core 1: 7528.32 29.41 2124.95 732.25 7765.52
00:07:47.569 ========================================================
00:07:47.569 Total : 45169.94 176.45 2124.80 707.85 7852.38
00:07:47.569
00:07:49.477 Initializing NVMe Controllers
00:07:49.477 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:49.477 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:49.477 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:49.477 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:49.477 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:07:49.477 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:07:49.477 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:07:49.477 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:07:49.477 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:07:49.477 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:07:49.477 Initialization complete. Launching workers.
00:07:49.477 ========================================================
00:07:49.477 Latency(us)
00:07:49.477 Device Information : IOPS MiB/s Average min max
00:07:49.477 PCIE (0000:00:10.0) NSID 1 from core 2: 4623.38 18.06 3458.88 732.49 12865.83
00:07:49.477 PCIE (0000:00:11.0) NSID 1 from core 2: 4623.38 18.06 3460.30 709.49 12787.36
00:07:49.477 PCIE (0000:00:13.0) NSID 1 from core 2: 4623.38 18.06 3459.94 742.39 13241.96
00:07:49.477 PCIE (0000:00:12.0) NSID 1 from core 2: 4623.38 18.06 3460.10 755.21 13042.73
00:07:49.477 PCIE (0000:00:12.0) NSID 2 from core 2: 4623.38 18.06 3460.26 743.45 12470.25
00:07:49.477 PCIE (0000:00:12.0) NSID 3 from core 2: 4623.38 18.06 3458.86 756.26 12842.77
00:07:49.477 ========================================================
00:07:49.477 Total : 27740.25 108.36 3459.72 709.49 13241.96
00:07:49.477
00:07:49.737 ************************************
00:07:49.737 END TEST nvme_multi_secondary
00:07:49.737 ************************************
00:07:49.737 01:27:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63838
00:07:49.737 01:27:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63839
00:07:49.737
00:07:49.737 real 0m10.670s
00:07:49.737 user 0m18.436s
00:07:49.737 sys 0m0.639s
00:07:49.737 01:27:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.737 01:27:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:07:49.737 01:27:57 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:07:49.737 01:27:57 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:07:49.737 01:27:57 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62801 ]]
00:07:49.737 01:27:57 nvme -- common/autotest_common.sh@1094 -- # kill 62801
00:07:49.737 01:27:57 nvme -- common/autotest_common.sh@1095 -- # wait 62801
00:07:49.737 [2024-11-17 01:27:57.982525] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.982618] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.982654] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.982677] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.986037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.986108] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.986129] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.986152] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.989417] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.989479] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.989500] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.989522] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.993475] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.993603] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.993637] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 [2024-11-17 01:27:57.993668] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63717) is not found. Dropping the request.
00:07:49.737 01:27:58 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:07:49.737 01:27:58 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:07:49.737 01:27:58 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:07:49.737 01:27:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.737 01:27:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.737 01:27:58 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:49.737 ************************************
00:07:49.737 START TEST bdev_nvme_reset_stuck_adm_cmd
00:07:49.737 ************************************
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:07:49.998 * Looking for test storage...
00:07:49.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:49.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.998 --rc genhtml_branch_coverage=1
00:07:49.998 --rc genhtml_function_coverage=1
00:07:49.998 --rc genhtml_legend=1
00:07:49.998 --rc geninfo_all_blocks=1
00:07:49.998 --rc geninfo_unexecuted_blocks=1
00:07:49.998
00:07:49.998 '
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:49.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.998 --rc genhtml_branch_coverage=1
00:07:49.998 --rc genhtml_function_coverage=1
00:07:49.998 --rc genhtml_legend=1
00:07:49.998 --rc geninfo_all_blocks=1
00:07:49.998 --rc geninfo_unexecuted_blocks=1
00:07:49.998
00:07:49.998 '
00:07:49.998 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:49.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.998 --rc genhtml_branch_coverage=1
00:07:49.998 --rc genhtml_function_coverage=1
00:07:49.998 --rc genhtml_legend=1
00:07:49.998 --rc geninfo_all_blocks=1
00:07:49.998 --rc geninfo_unexecuted_blocks=1
00:07:49.998
00:07:49.998 '
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:49.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.999 --rc genhtml_branch_coverage=1
00:07:49.999 --rc genhtml_function_coverage=1
00:07:49.999 --rc genhtml_legend=1
00:07:49.999 --rc geninfo_all_blocks=1
00:07:49.999 --rc geninfo_unexecuted_blocks=1
00:07:49.999
00:07:49.999 '
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=()
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:07:49.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64001
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64001
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64001 ']'
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
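The get_first_nvme_bdf helper traced above boils down to one pipeline over gen_nvme.sh's JSON output. Stand-alone, with the same repo path this job uses, the equivalent is:

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))   # all NVMe PCI addresses
  printf 'first NVMe bdf: %s\n' "${bdfs[0]}"                                   # -> 0000:00:10.0 on this VM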
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:49.999 01:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:49.999 [2024-11-17 01:27:58.434451] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:07:49.999 [2024-11-17 01:27:58.434751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64001 ]
00:07:50.259 [2024-11-17 01:27:58.615695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:50.520 [2024-11-17 01:27:58.756936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:50.520 [2024-11-17 01:27:58.757445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:50.520 [2024-11-17 01:27:58.757878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:50.520 [2024-11-17 01:27:58.757884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:51.092 nvme0n1
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_br9DJ.txt
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:51.092 true
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731806879
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64024
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:07:51.092 01:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:07:53.003 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:07:53.003 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:53.003 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:53.264 [2024-11-17 01:28:01.461979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:07:53.264 [2024-11-17 01:28:01.462235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:07:53.264 [2024-11-17 01:28:01.462259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:07:53.264 [2024-11-17 01:28:01.462272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:53.264 [2024-11-17 01:28:01.466464] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:07:53.264 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64024
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64024
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64024
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_br9DJ.txt
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_br9DJ.txt
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64001
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64001 ']'
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64001
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:53.264 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64001
00:07:53.265 killing process with pid 64001
00:07:53.265 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:53.265 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:53.265 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64001'
00:07:53.265 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64001
00:07:53.265 01:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64001
00:07:54.651 01:28:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:07:54.651 01:28:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:07:54.651
00:07:54.651 real 0m4.930s
00:07:54.651 user 0m17.343s
00:07:54.651 sys 0m0.516s
00:07:54.651 ************************************
00:07:54.651 END TEST bdev_nvme_reset_stuck_adm_cmd
00:07:54.651 ************************************
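Condensing the test that just finished: it arms a one-shot injection on nvme0's admin queue (opcode 10 = Get Features, held up to 15 s, completed with sct=0/sc=1, never forwarded to the device), submits exactly that admin command through bdev_nvme_send_cmd so it gets stuck, resets the controller to flush it, then pulls SC/SCT back out of the base64-encoded completion. A condensed replay of the same RPCs plus the status decode — a sketch, not the harness verbatim (bash 4+ assumed for mapfile; payload and tmp file name copied from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # 0x0a = Get Features; cdw10=0x07 (Number of Queues) is encoded inside the base64 command below
  $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
      -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== > /tmp/err_inj_br9DJ.txt &
  $rpc bdev_nvme_reset_controller nvme0   # aborts the stuck command, letting send_cmd return
  wait
  cpl=$(jq -r .cpl /tmp/err_inj_br9DJ.txt)              # 16-byte completion, base64 (AAAAAAAAAAAAAAAAAAACAA== here)
  mapfile -t b < <(base64 -d <<< "$cpl" | hexdump -ve '/1 "0x%02x\n"')
  status=$(( b[15] << 8 | b[14] ))                      # status word lives in the last two bytes, little-endian
  printf 'sc=0x%x sct=0x%x\n' $(( status >> 1 & 255 )) $(( status >> 9 & 3 ))   # -> sc=0x1 sct=0x0, matching the injection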
00:07:54.651 01:28:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.651 01:28:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:54.912 01:28:03 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:07:54.912 01:28:03 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:07:54.912 01:28:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:54.912 01:28:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.912 01:28:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.912 ************************************
00:07:54.912 START TEST nvme_fio
00:07:54.912 ************************************
00:07:54.912 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:07:54.912 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:07:54.912 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:07:54.912 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:07:54.912 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:54.912 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:07:54.912 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:54.912 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:54.912 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:54.913 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:07:54.913 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:07:54.913 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0')
00:07:54.913 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:07:54.913 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:07:54.913 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:55.174 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:07:55.174 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:55.174 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:07:55.436 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:07:55.436 01:28:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:07:55.436 01:28:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:07:55.436 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:07:55.436 fio-3.35
00:07:55.436 Starting 1 thread
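What the xtrace above amounts to: stock fio drives SPDK's userspace NVMe driver through the external ioengine by LD_PRELOADing the plugin (libasan.so.8 rides along only because this is an ASAN-instrumented build), and the target is named with trtype/traddr inside --filename, with dots replacing the colons of the BDF because fio treats ':' as a filename separator. Stripped to one command, with the same paths this job uses:

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096   # 0000:00:10.0, colons written as dots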
00:08:02.092
00:08:02.092 test: (groupid=0, jobs=1): err= 0: pid=64165: Sun Nov 17 01:28:09 2024
00:08:02.092 read: IOPS=20.3k, BW=79.5MiB/s (83.3MB/s)(159MiB/2001msec)
00:08:02.092 slat (nsec): min=3313, max=70461, avg=5358.15, stdev=2829.78
00:08:02.092 clat (usec): min=362, max=9206, avg=3126.59, stdev=1208.02
00:08:02.092 lat (usec): min=367, max=9217, avg=3131.95, stdev=1209.49
00:08:02.092 clat percentiles (usec):
00:08:02.092 | 1.00th=[ 1696], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2343],
00:08:02.092 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2835],
00:08:02.092 | 70.00th=[ 3064], 80.00th=[ 3687], 90.00th=[ 5080], 95.00th=[ 5997],
00:08:02.092 | 99.00th=[ 7177], 99.50th=[ 7570], 99.90th=[ 8291], 99.95th=[ 8586],
00:08:02.092 | 99.99th=[ 8848]
00:08:02.092 bw ( KiB/s): min=72992, max=94800, per=99.40%, avg=80874.67, stdev=12094.73, samples=3
00:08:02.092 iops : min=18248, max=23700, avg=20218.67, stdev=3023.68, samples=3
00:08:02.092 write: IOPS=20.3k, BW=79.3MiB/s (83.1MB/s)(159MiB/2001msec); 0 zone resets
00:08:02.092 slat (nsec): min=3431, max=72031, avg=5471.87, stdev=2716.48
00:08:02.092 clat (usec): min=402, max=9089, avg=3149.04, stdev=1219.75
00:08:02.092 lat (usec): min=407, max=9101, avg=3154.52, stdev=1221.17
00:08:02.092 clat percentiles (usec):
00:08:02.092 | 1.00th=[ 1729], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2343],
00:08:02.092 | 30.00th=[ 2442], 40.00th=[ 2573], 50.00th=[ 2704], 60.00th=[ 2868],
00:08:02.092 | 70.00th=[ 3097], 80.00th=[ 3720], 90.00th=[ 5211], 95.00th=[ 6063],
00:08:02.092 | 99.00th=[ 7111], 99.50th=[ 7570], 99.90th=[ 8356], 99.95th=[ 8586],
00:08:02.092 | 99.99th=[ 8979]
00:08:02.092 bw ( KiB/s): min=73136, max=95192, per=99.82%, avg=81026.67, stdev=12294.12, samples=3
00:08:02.092 iops : min=18284, max=23798, avg=20256.67, stdev=3073.53, samples=3
00:08:02.092 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.07%
00:08:02.092 lat (msec) : 2=2.44%, 4=80.20%, 10=17.27%
00:08:02.092 cpu : usr=99.05%, sys=0.00%, ctx=4, majf=0, minf=607
00:08:02.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:02.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:02.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:02.092 issued rwts: total=40702,40606,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:02.092 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:02.092
00:08:02.092 Run status group 0 (all jobs):
00:08:02.092 READ: bw=79.5MiB/s (83.3MB/s), 79.5MiB/s-79.5MiB/s (83.3MB/s-83.3MB/s), io=159MiB (167MB), run=2001-2001msec
00:08:02.092 WRITE: bw=79.3MiB/s (83.1MB/s), 79.3MiB/s-79.3MiB/s (83.1MB/s-83.1MB/s), io=159MiB (166MB), run=2001-2001msec
00:08:02.092 -----------------------------------------------------
00:08:02.092 Suppressions used:
00:08:02.092 count bytes template
00:08:02.092 1 32 /usr/src/fio/parse.c
00:08:02.092 1 8 libtcmalloc_minimal.so
00:08:02.092 -----------------------------------------------------
00:08:02.092
00:08:02.092 01:28:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:02.093 01:28:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:02.093 01:28:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:02.093 01:28:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:02.093 01:28:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:02.093 01:28:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:02.093 01:28:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:02.093 01:28:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:02.093 01:28:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:08:02.093 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:02.093 fio-3.35
00:08:02.093 Starting 1 thread
00:08:08.670
00:08:08.670 test: (groupid=0, jobs=1): err= 0: pid=64226: Sun Nov 17 01:28:16 2024
00:08:08.670 read: IOPS=19.5k, BW=76.3MiB/s (80.0MB/s)(153MiB/2001msec)
00:08:08.670 slat (nsec): min=3298, max=73194, avg=5463.00, stdev=2954.71
00:08:08.670 clat (usec): min=498, max=11910, avg=3255.60, stdev=1256.20
00:08:08.670 lat (usec): min=503, max=11955, avg=3261.06, stdev=1257.68
00:08:08.670 clat percentiles (usec):
00:08:08.670 | 1.00th=[ 1958], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2343],
00:08:08.670 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2769], 60.00th=[ 2966],
00:08:08.670 | 70.00th=[ 3294], 80.00th=[ 4146], 90.00th=[ 5276], 95.00th=[ 6063],
00:08:08.670 | 99.00th=[ 7177], 99.50th=[ 7439], 99.90th=[ 8291], 99.95th=[ 8717],
00:08:08.670 | 99.99th=[11600]
00:08:08.670 bw ( KiB/s): min=76752, max=85608, per=100.00%, avg=79944.00, stdev=4918.36, samples=3
00:08:08.670 iops : min=19188, max=21402, avg=19986.00, stdev=1229.59, samples=3
00:08:08.670 write: IOPS=19.5k, BW=76.2MiB/s (79.9MB/s)(152MiB/2001msec); 0 zone resets
00:08:08.670 slat (usec): min=3, max=111, avg= 5.62, stdev= 3.15
00:08:08.670 clat (usec): min=480, max=11773, avg=3277.99, stdev=1259.91
00:08:08.670 lat (usec): min=485, max=11787, avg=3283.61, stdev=1261.41
00:08:08.670 clat percentiles (usec):
00:08:08.670 | 1.00th=[ 1975], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2376],
00:08:08.670 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2802], 60.00th=[ 2999],
00:08:08.670 | 70.00th=[ 3294], 80.00th=[ 4146], 90.00th=[ 5342], 95.00th=[ 6128],
00:08:08.670 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8455], 99.95th=[ 9110],
00:08:08.670 | 99.99th=[11338]
00:08:08.671 bw ( KiB/s): min=77256, max=85336, per=100.00%, avg=79976.00, stdev=4642.07, samples=3
00:08:08.671 iops : min=19314, max=21334, avg=19994.00, stdev=1160.52, samples=3
00:08:08.671 lat (usec) : 500=0.01%, 750=0.02%
00:08:08.671 lat (msec) : 2=1.24%, 4=77.45%, 10=21.26%, 20=0.03%
00:08:08.671 cpu : usr=98.85%, sys=0.20%, ctx=4, majf=0, minf=607
00:08:08.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:08.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:08.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:08.671 issued rwts: total=39089,39028,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:08.671 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:08.671
00:08:08.671 Run status group 0 (all jobs):
00:08:08.671 READ: bw=76.3MiB/s (80.0MB/s), 76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=153MiB (160MB), run=2001-2001msec
00:08:08.671 WRITE: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=152MiB (160MB), run=2001-2001msec
00:08:08.671 -----------------------------------------------------
00:08:08.671 Suppressions used:
00:08:08.671 count bytes template
00:08:08.671 1 32 /usr/src/fio/parse.c
00:08:08.671 1 8 libtcmalloc_minimal.so
00:08:08.671 -----------------------------------------------------
00:08:08.671
00:08:08.671 01:28:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:08.671 01:28:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:08.671 01:28:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:08.671 01:28:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:08.671 01:28:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:08.671 01:28:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:08.671 01:28:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:08.671 01:28:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:08.671 01:28:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:08.671 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:08.671 fio-3.35
00:08:08.671 Starting 1 thread
00:08:15.254
00:08:15.254 test: (groupid=0, jobs=1): err= 0: pid=64287: Sun Nov 17 01:28:22 2024
00:08:15.254 read: IOPS=16.6k, BW=64.8MiB/s (68.0MB/s)(130MiB/2001msec)
00:08:15.254 slat (nsec): min=4255, max=86914, avg=5963.55, stdev=3365.85
00:08:15.254 clat (usec): min=245, max=10141, avg=3831.97, stdev=1357.94
00:08:15.254 lat (usec): min=250, max=10170, avg=3837.93, stdev=1359.21
00:08:15.254 clat percentiles (usec):
00:08:15.254 | 1.00th=[ 2024], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2671],
00:08:15.254 | 30.00th=[ 2868], 40.00th=[ 3064], 50.00th=[ 3326], 60.00th=[ 3851],
00:08:15.254 | 70.00th=[ 4555], 80.00th=[ 5080], 90.00th=[ 5800], 95.00th=[ 6456],
00:08:15.254 | 99.00th=[ 7570], 99.50th=[ 7963], 99.90th=[ 8717], 99.95th=[ 9241],
00:08:15.254 | 99.99th=[10028]
00:08:15.254 bw ( KiB/s): min=64568, max=68032, per=99.10%, avg=65800.00, stdev=1936.45, samples=3
00:08:15.254 iops : min=16142, max=17008, avg=16450.00, stdev=484.11, samples=3
00:08:15.254 write: IOPS=16.6k, BW=65.0MiB/s (68.1MB/s)(130MiB/2001msec); 0 zone resets
00:08:15.254 slat (nsec): min=4330, max=87401, avg=6117.82, stdev=3380.81
00:08:15.254 clat (usec): min=226, max=10079, avg=3845.28, stdev=1346.60
00:08:15.254 lat (usec): min=232, max=10095, avg=3851.40, stdev=1347.87
00:08:15.254 clat percentiles (usec):
00:08:15.254 | 1.00th=[ 2057], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2704],
00:08:15.254 | 30.00th=[ 2868], 40.00th=[ 3064], 50.00th=[ 3326], 60.00th=[ 3851],
00:08:15.254 | 70.00th=[ 4490], 80.00th=[ 5080], 90.00th=[ 5800], 95.00th=[ 6456],
00:08:15.254 | 99.00th=[ 7504], 99.50th=[ 7898], 99.90th=[ 8717], 99.95th=[ 9241],
00:08:15.254 | 99.99th=[10028]
00:08:15.254 bw ( KiB/s): min=63936, max=67752, per=98.62%, avg=65616.00, stdev=1948.44, samples=3
00:08:15.254 iops : min=15984, max=16938, avg=16404.00, stdev=487.11, samples=3
00:08:15.254 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:08:15.254 lat (msec) : 2=0.77%, 4=61.22%, 10=37.97%, 20=0.01%
00:08:15.254 cpu : usr=98.60%, sys=0.15%, ctx=22, majf=0, minf=607
00:08:15.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:15.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:15.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:15.254 issued rwts: total=33216,33284,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:15.254 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:15.254
00:08:15.254 Run status group 0 (all jobs):
00:08:15.254 READ: bw=64.8MiB/s (68.0MB/s), 64.8MiB/s-64.8MiB/s (68.0MB/s-68.0MB/s), io=130MiB (136MB), run=2001-2001msec
00:08:15.254 WRITE: bw=65.0MiB/s (68.1MB/s), 65.0MiB/s-65.0MiB/s (68.1MB/s-68.1MB/s), io=130MiB (136MB), run=2001-2001msec
00:08:15.254 -----------------------------------------------------
00:08:15.254 Suppressions used:
00:08:15.254 count bytes template
00:08:15.254 1 32 /usr/src/fio/parse.c
00:08:15.254 1 8 libtcmalloc_minimal.so
00:08:15.255 -----------------------------------------------------
00:08:15.255
00:08:15.255 01:28:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:15.255 01:28:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:15.255 01:28:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:15.255 01:28:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:15.255 01:28:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:15.255 01:28:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:15.255 01:28:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:15.255 01:28:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:15.255 01:28:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:08:15.255 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:15.255 fio-3.35
00:08:15.255 Starting 1 thread
00:08:27.492
00:08:27.492 test: (groupid=0, jobs=1): err= 0: pid=64342: Sun Nov 17 01:28:34 2024
00:08:27.492 read: IOPS=24.9k, BW=97.4MiB/s (102MB/s)(195MiB/2001msec)
00:08:27.492 slat (nsec): min=4209, max=60294, avg=4821.78, stdev=1795.33
00:08:27.492 clat (usec): min=332, max=9199, avg=2559.81, stdev=644.97
00:08:27.492 lat (usec): min=339, max=9248, avg=2564.63, stdev=646.10
00:08:27.492 clat percentiles (usec):
00:08:27.492 | 1.00th=[ 1713], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2343],
00:08:27.492 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442],
00:08:27.492 | 70.00th=[ 2474], 80.00th=[ 2507], 90.00th=[ 2835], 95.00th=[ 3818],
00:08:27.492 | 99.00th=[ 5604], 99.50th=[ 6194], 99.90th=[ 7439], 99.95th=[ 7701],
00:08:27.492 | 99.99th=[ 9110]
00:08:27.492 bw ( KiB/s): min=97872, max=99328, per=98.74%, avg=98453.33, stdev=771.05, samples=3
00:08:27.492 iops : min=24468, max=24832, avg=24613.33, stdev=192.76, samples=3
00:08:27.492 write: IOPS=24.8k, BW=96.8MiB/s (102MB/s)(194MiB/2001msec); 0 zone resets
00:08:27.492 slat (nsec): min=4284, max=80990, avg=5089.05, stdev=1849.27
00:08:27.492 clat (usec): min=354, max=9129, avg=2568.46, stdev=667.35
00:08:27.492 lat (usec): min=362, max=9146, avg=2573.55, stdev=668.46
00:08:27.492 clat percentiles (usec):
00:08:27.492 | 1.00th=[ 1713], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2343],
00:08:27.492 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442],
00:08:27.492 | 70.00th=[ 2474], 80.00th=[ 2507], 90.00th=[ 2835], 95.00th=[ 3916],
00:08:27.492 |
99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 7570], 99.95th=[ 7767], 00:08:27.492 | 99.99th=[ 8979] 00:08:27.492 bw ( KiB/s): min=97864, max=99048, per=99.32%, avg=98493.33, stdev=595.52, samples=3 00:08:27.492 iops : min=24466, max=24762, avg=24623.33, stdev=148.88, samples=3 00:08:27.492 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:08:27.492 lat (msec) : 2=2.60%, 4=92.69%, 10=4.67% 00:08:27.492 cpu : usr=99.40%, sys=0.00%, ctx=16, majf=0, minf=606 00:08:27.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:27.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:27.492 issued rwts: total=49879,49608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:27.492 00:08:27.492 Run status group 0 (all jobs): 00:08:27.492 READ: bw=97.4MiB/s (102MB/s), 97.4MiB/s-97.4MiB/s (102MB/s-102MB/s), io=195MiB (204MB), run=2001-2001msec 00:08:27.492 WRITE: bw=96.8MiB/s (102MB/s), 96.8MiB/s-96.8MiB/s (102MB/s-102MB/s), io=194MiB (203MB), run=2001-2001msec 00:08:27.492 ----------------------------------------------------- 00:08:27.492 Suppressions used: 00:08:27.492 count bytes template 00:08:27.492 1 32 /usr/src/fio/parse.c 00:08:27.492 1 8 libtcmalloc_minimal.so 00:08:27.492 ----------------------------------------------------- 00:08:27.492 00:08:27.492 01:28:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:27.492 01:28:34 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:27.492 00:08:27.492 real 0m31.392s 00:08:27.492 user 0m16.663s 00:08:27.492 sys 0m27.917s 00:08:27.492 01:28:34 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.492 01:28:34 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:27.492 ************************************ 00:08:27.492 END TEST nvme_fio 00:08:27.492 ************************************ 00:08:27.492 ************************************ 00:08:27.492 END TEST nvme 00:08:27.492 ************************************ 00:08:27.492 00:08:27.492 real 1m40.792s 00:08:27.492 user 3m38.263s 00:08:27.492 sys 0m38.391s 00:08:27.492 01:28:34 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.492 01:28:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.492 01:28:34 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:27.492 01:28:34 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:27.492 01:28:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.492 01:28:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.492 01:28:34 -- common/autotest_common.sh@10 -- # set +x 00:08:27.492 ************************************ 00:08:27.492 START TEST nvme_scc 00:08:27.492 ************************************ 00:08:27.492 01:28:34 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:27.492 * Looking for test storage... 
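[Editor's note] The fio_nvme/fio_plugin sequence traced twice above follows one pattern: resolve which sanitizer runtime the SPDK fio plugin links against, then preload it ahead of the plugin so fio's dlopen() of the spdk ioengine runs under ASAN. Below is a minimal sketch of that helper, reconstructed from the xtrace lines rather than copied from common/autotest_common.sh; paths and the calling convention are illustrative.

  fio_plugin() {
      local fio_dir=/usr/src/fio
      local sanitizers=('libasan' 'libclang_rt.asan')
      local plugin=$1 sanitizer asan_lib=
      shift
      for sanitizer in "${sanitizers[@]}"; do
          # Column 3 of ldd output is the resolved library path,
          # e.g. /usr/lib64/libasan.so.8 in the runs above
          asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
          [[ -n $asan_lib ]] && break
      done
      # The sanitizer runtime must come before the plugin in LD_PRELOAD
      LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
  }

  # fio_plugin build/fio/spdk_nvme app/fio/nvme/example_config.fio \
  #     '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096

Note the dots in traddr=0000.00.12.0: fio reserves ':' as a filename separator, so the PCI address is passed with '.' separators and translated back by the SPDK plugin.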
00:08:27.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:27.492 01:28:34 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:27.492 01:28:34 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:27.492 01:28:34 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:27.492 01:28:34 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:27.492 01:28:34 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.492 01:28:34 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:27.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.492 --rc genhtml_branch_coverage=1 00:08:27.492 --rc genhtml_function_coverage=1 00:08:27.492 --rc genhtml_legend=1 00:08:27.492 --rc geninfo_all_blocks=1 00:08:27.492 --rc geninfo_unexecuted_blocks=1 00:08:27.492 00:08:27.492 ' 00:08:27.492 01:28:34 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:27.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.492 --rc genhtml_branch_coverage=1 00:08:27.492 --rc genhtml_function_coverage=1 00:08:27.492 --rc genhtml_legend=1 00:08:27.492 --rc geninfo_all_blocks=1 00:08:27.492 --rc geninfo_unexecuted_blocks=1 00:08:27.492 00:08:27.492 ' 00:08:27.492 01:28:34 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:27.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.492 --rc genhtml_branch_coverage=1 00:08:27.492 --rc genhtml_function_coverage=1 00:08:27.492 --rc genhtml_legend=1 00:08:27.492 --rc geninfo_all_blocks=1 00:08:27.492 --rc geninfo_unexecuted_blocks=1 00:08:27.492 00:08:27.492 ' 00:08:27.492 01:28:34 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:27.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.492 --rc genhtml_branch_coverage=1 00:08:27.492 --rc genhtml_function_coverage=1 00:08:27.492 --rc genhtml_legend=1 00:08:27.492 --rc geninfo_all_blocks=1 00:08:27.492 --rc geninfo_unexecuted_blocks=1 00:08:27.492 00:08:27.492 ' 00:08:27.492 01:28:34 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:27.492 01:28:34 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:27.492 01:28:34 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:27.492 01:28:34 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:27.492 01:28:34 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.492 01:28:34 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.493 01:28:34 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.493 01:28:34 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.493 01:28:34 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.493 01:28:34 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:27.493 01:28:34 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
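[Editor's note] The scripts/common.sh trace above (cmp_versions, decimal, the IFS=.-: reads) is deciding whether the installed lcov predates 2.x, which determines whether the old --rc lcov_* option spellings get exported. A hedged sketch of that comparison, condensed from the xtrace output; the real helper also validates each field with a decimal() guard, elided here:

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local IFS=.-:
      local ver1 ver1_l ver2 ver2_l op=$2 v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          # Missing fields compare as 0, so "1.15" vs "2" acts like 1.15.0 vs 2.0.0
          if (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )); then
              [[ $op == '>' ]]; return
          elif (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )); then
              [[ $op == '<' ]]; return
          fi
      done
      [[ $op == '==' ]]
  }

  # lt 1.15 2 && echo "lcov older than 2.x: keep the --rc lcov_* spellings"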
00:08:27.493 01:28:34 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:27.493 01:28:34 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:27.493 01:28:34 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:27.493 01:28:34 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:27.493 01:28:34 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:27.493 01:28:34 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:27.493 01:28:34 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:27.493 01:28:34 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:27.493 01:28:34 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:27.493 01:28:34 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.493 01:28:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:27.493 01:28:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:27.493 01:28:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:27.493 01:28:34 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:27.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:27.493 Waiting for block devices as requested 00:08:27.493 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.493 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.493 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.493 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.769 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:32.769 01:28:40 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:32.769 01:28:40 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:32.769 01:28:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:32.769 01:28:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:32.769 01:28:40 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
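[Editor's note] The wall of nvme/functions.sh traces that continues below is scan_nvme_ctrls populating one global associative array per controller: nvme_get runs nvme-cli, splits each "field : value" line on ':', and evals it into ${ctrl_dev}[field]. A simplified sketch of that loop, reconstructed from the trace; multi-word fields such as the power-state rows need extra handling that is omitted here:

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                  # declares a global assoc array, e.g. nvme0
      while IFS=: read -r reg val; do
          reg=${reg%%[[:space:]]*}         # "vid       " -> "vid"
          [[ -n $reg && -n $val ]] || continue
          eval "${ref}[$reg]=\"${val# }\"" # nvme0[vid]="0x1b36", nvme0[sn]="12341 ", ...
      done < <(/usr/local/src/nvme-cli/nvme "$@")
  }

  # nvme_get nvme0 id-ctrl /dev/nvme0
  # echo "model=${nvme0[mn]} mdts=${nvme0[mdts]}"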
00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.769 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
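[Editor's note] With fields like oacs=0x12a captured, consumers of these arrays can reduce capability checks to bit tests on the stored hex words. An illustrative helper, not part of the traced script: per the NVMe spec, bit 1 of OACS indicates Format NVM support, and 0x12a above has that bit set.

  ctrl_supports_format_nvm() {
      local -n _ctrl=$1                # nameref onto the array filled by nvme_get
      (( _ctrl[oacs] & (1 << 1) ))     # 0x12a & 0x2 -> nonzero, so supported
  }

  # ctrl_supports_format_nvm nvme0 && echo "nvme0: Format NVM supported"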
00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.770 01:28:40 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:32.771 01:28:40 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.771 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:32.772 01:28:40 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val
00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:08:32.772 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:08:32.773 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
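The block above is the tail of nvme_get populating the nvme0n1 associative array, followed by functions.sh registering controller nvme0 (PCI 0000:00:11.0) in its global ctrls, nvmes, bdfs and ordered_ctrls maps. As the @16-@23 trace lines show, nvme_get pipes nvme-cli's "field : value" output through an IFS=: read loop and evals each pair into a bash associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's plain-text output format; the function name is hypothetical and the real nvme/functions.sh normalizes field labels more carefully than this:

  nvme_get_sketch() {                     # e.g. nvme_get_sketch nvme1 id-ctrl /dev/nvme1
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                 # global array named after the device (@20)
      while IFS=: read -r reg val; do     # split each "label : value" line at ':' (@21)
          reg=${reg%%[[:space:]]*}        # keep the first token of the label, e.g. "vid"
          val=${val# }                    # drop the space nvme-cli prints after ':'
          [[ -n $val ]] && eval "${ref}[${reg}]=\"\$val\""   # guarded assignment (@22/@23)
      done < <(/usr/local/src/nvme-cli/nvme "$@")            # the probe itself (@16)
  }

After a call like nvme_get_sketch nvme1 id-ctrl /dev/nvme1, fields read back as e.g. "${nvme1[sn]}" or "${nvme1[subnqn]}", which is the form the later checks in this test can consume.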
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:08:32.774 01:28:40 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:32.774 01:28:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:08:32.774 01:28:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:32.774 01:28:40 nvme_scc -- scripts/common.sh@27 -- # return 0
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:08:32.774 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:08:32.775 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:08:32.776 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
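The raw id-ctrl values recorded above only become meaningful against the NVMe spec's encodings: wctemp and cctemp are absolute temperatures in Kelvin, sqes and cqes pack the required and maximum queue-entry sizes as powers of two in their low and high nibbles, and mdts is a power-of-two multiple of the controller's minimum page size. A few worked decodes of the nvme1 values from this run, as an illustration only; the 4 KiB page size is an assumption, since CAP.MPSMIN is not part of this dump:

  echo $(( nvme1[wctemp] - 273 ))        # 343 K  -> ~70 C warning threshold
  echo $(( nvme1[cctemp] - 273 ))        # 373 K  -> ~100 C critical threshold
  echo $(( 1 << (nvme1[sqes] & 0xf) ))   # 0x66   -> 64-byte submission-queue entries
  echo $(( 1 << (nvme1[cqes] & 0xf) ))   # 0x44   -> 16-byte completion-queue entries
  echo $(( (1 << nvme1[mdts]) * 4 ))     # mdts=7 -> 512 KiB max transfer at 4 KiB pages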
nvme1n1[ncap]=0x17a17a 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.777 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@23 -- 
00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@23 -- nvme1n1 (id-ns, continued): nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@23 -- nvme1n1 ids: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@23 -- nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:08:32.778 01:28:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:08:32.779 01:28:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:08:32.779 01:28:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:32.779 01:28:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:08:32.779 01:28:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:08:32.779 01:28:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:08:32.779 01:28:40 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:32.779 01:28:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:08:32.779 01:28:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:32.779 01:28:40 nvme_scc -- scripts/common.sh@27 -- # return 0
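Every assignment in the trace above, and in the nvme2 parse that follows, is produced by the same small parser: nvme_get runs nvme-cli's id-ctrl/id-ns against a device, splits each output line on ':' in a read loop (functions.sh@21-22), and evals the pair into a global associative array named after the device (functions.sh@23). A minimal sketch of that loop, reconstructed from the function and line references visible in the trace; it is not the verbatim SPDK helper, which handles whitespace and quoting more carefully:

nvme_get() {                              # nvme_get <array> <subcmd> <device>
    local ref=$1 reg val                  # functions.sh@17
    shift
    local -gA "$ref=()"                   # declare the global assoc array, e.g. nvme2
    while IFS=: read -r reg val; do       # split 'field : value' lines from nvme-cli
        [[ -n $val ]] || continue         # skip lines without a value, as in the trace
        reg=${reg//[[:space:]]/}          # trim padding around the field name
        val=${val# }                      # drop the space after ':'
        eval "${ref}[\$reg]=\$val"        # e.g. nvme2[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# invoked as in the trace: nvme_get nvme2 id-ctrl /dev/nvme2

Keeping one array per device is what lets later test code ask questions such as (( ${nvme2[oncs]} & 0x10 )) without re-running nvme-cli.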
00:08:32.779 01:28:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:08:32.779 01:28:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:08:32.779 01:28:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:08:32.779 01:28:40 nvme_scc -- nvme2 identity: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' ieee=525400 cntlid=0 cntrltype=1 ver=0x10400 fguid=00000000-0000-0000-0000-000000000000 subnqn=nqn.2019-08.org.qemu:12342
00:08:32.780 01:28:40 nvme_scc -- nvme2 capabilities: rab=6 cmic=0 mdts=7 oaes=0x100 ctratt=0x8000 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 oncs=0x15d fna=0 vwc=0x7 ocfs=0x3 sgls=0x1
00:08:32.780 01:28:40 nvme_scc -- nvme2 queues/limits: sqes=0x66 cqes=0x44 maxcmd=0 nn=256
00:08:32.781 01:28:40 nvme_scc -- nvme2 thermal/power: wctemp=343 cctemp=373 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:08:32.781 01:28:40 nvme_scc -- nvme2 zeroed fields: rtd3r rtd3e rrls crdt1 crdt2 crdt3 nvmsr vwci mec elpe npss avscc apsta mtfa hmpre hmmin tnvmcap unvmcap rpmbs edstt dsto fwug kas hctma mntmt mxtmt sanicap hmminds hmmaxd nsetidmax endgidmax anatt anacap anagrpmax nanagrpid pels domainid megcap fuses awun awupf icsvscc nwpc acwu mnan maxdna maxcna ioccsz iorcsz icdoff fcatt msdbd ofcs
00:08:32.782 01:28:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
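A few of the nvme2 controller fields above decode as follows (standard Identify Controller semantics; the 4 KiB page size below is an assumption, since MDTS scales with CAP.MPSMIN and the CAP register is not in this trace, although 4 KiB is the usual QEMU value):

# Hypothetical decode of the dump above; not part of functions.sh.
mdts=7 sqes=0x66 cqes=0x44 nn=256
echo "max transfer: $(( (1 << mdts) * 4096 / 1024 )) KiB"            # 2^7 pages * 4 KiB = 512 KiB
echo "SQ entry: $(( 1 << (sqes & 0xf) ))-$(( 1 << (sqes >> 4) )) B"  # required/max, both 64 B
echo "CQ entry: $(( 1 << (cqes & 0xf) ))-$(( 1 << (cqes >> 4) )) B"  # required/max, both 16 B
echo "namespaces: $nn"

wctemp=343 and cctemp=373 are kelvins (roughly 70 C warning, 100 C critical), and oncs=0x15d advertises, among others, Compare, Write Zeroes and Copy support, the last being what this nvme_scc (simple copy command) test exercises.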
00:08:32.782 01:28:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:32.782 01:28:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:08:32.782 01:28:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:08:32.782 01:28:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:08:32.782 01:28:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:08:32.782 01:28:40 nvme_scc -- nvme2n1 geometry: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 dlfeat=1
00:08:32.782 01:28:40 nvme_scc -- nvme2n1 copy/atomics: mssrl=128 mcl=128 msrc=127 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:08:32.783 01:28:40 nvme_scc -- nvme2n1 zeroed fields: nmic rescap fpi noiob nvmcap npwg npwa npdg npda nows nulbaf anagrpid nsattr nvmsetid endgid
00:08:32.783 01:28:40 nvme_scc -- nvme2n1 ids: nguid=00000000000000000000000000000000 eui64=0000000000000000
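The geometry above is enough to size the namespace: the low nibble of flbas selects the active LBA format, so flbas=0x4 points at the lbaf4 entry of the format table just below (ms:0 lbads:12), i.e. 2^12 = 4096-byte blocks with no metadata. A quick check, using only values from this dump:

# Hypothetical arithmetic; values copied from the nvme2n1 dump above.
nsze=0x100000 lbads=12                            # lbads from lbaf4 below
echo "$(( nsze * (1 << lbads) / 1024**3 )) GiB"   # 0x100000 blocks * 4096 B = 4 GiB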
00:08:32.783 01:28:40 nvme_scc -- nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:08:32.783 01:28:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:08:32.783 01:28:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:32.783 01:28:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:08:32.783 01:28:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:08:32.783 01:28:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:08:32.783 01:28:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:08:32.784 01:28:40 nvme_scc -- nvme2n2 geometry: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0
00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[
-n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.784 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:32.785 
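The expansion traced above is SPDK's nvme_get helper (nvme/functions.sh@16-23 per the trace markers) flattening `nvme id-ns` text output into a global bash associative array: each output line is split on the first ':' into a register name and value, empty values are skipped, and the pair is stored with eval. A minimal sketch of the same pattern, with illustrative names and whitespace trimming that is an assumption, not the verbatim functions.sh source:

  # sketch_nvme_get ARRAY CMD... : parse "reg : val" lines produced by
  # CMD into a global associative array named ARRAY, as the trace above
  # does for nvme2n2.
  sketch_nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"              # declares the global array, e.g. nvme2n2=()
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}     # drop the padding around the field name (assumed)
          val=${val# }                 # drop the single space after ':' (assumed)
          [[ -n $reg && -n $val ]] || continue
          eval "${ref}[\$reg]=\$val"   # nvme2n2[nsze]=0x100000, ...
      done < <("$@")
  }
  # usage: sketch_nvme_get nvme2n2 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2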
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:08:32.785 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
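Each lbafN entry recorded above describes one LBA format: ms is the metadata bytes per block, lbads is the base-2 log of the data block size, and the format flagged '(in use)' (here lbaf4, selected by flbas=0x4) is the active one, i.e. 4096-byte blocks with no metadata. A small sketch of pulling the active block size out of such a string; the variable names are illustrative:

  # lbads:12 -> 2^12 = 4096-byte logical blocks
  lbaf='ms:0 lbads:12 rp:0 (in use)'           # value traced for lbaf4
  if [[ $lbaf =~ lbads:([0-9]+) ]]; then
      echo "active LBA data size: $((1 << BASH_REMATCH[1])) bytes"
  fi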
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:08:32.786 01:28:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:08:32.786 01:28:40 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:32.786 01:28:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:08:32.786 01:28:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:32.786 01:28:40 nvme_scc -- scripts/common.sh@27 -- # return 0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
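Before probing nvme3, the trace dips into scripts/common.sh: pci_can_use checks the controller's BDF (0000:00:13.0) against allow/deny filters and returns 0, so enumeration proceeds; the empty left-hand side of the traced '[[ =~ 0000:00:13.0 ]]' suggests no allow-list was configured in this run. A sketch of that gating logic, assuming PCI_ALLOWED/PCI_BLOCKED-style space-separated lists; illustrative, not the verbatim common.sh source:

  pci_can_use_sketch() {
      local bdf=$1
      # if an allow-list exists, the BDF must appear on it
      if [[ -n ${PCI_ALLOWED:-} && " $PCI_ALLOWED " != *" $bdf "* ]]; then
          return 1
      fi
      # and it must not appear on the deny-list
      [[ " ${PCI_BLOCKED:-} " != *" $bdf "* ]]
  }
  pci_can_use_sketch 0000:00:13.0 && echo "0000:00:13.0 is usable"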
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:08:32.787 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:08:32.788 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:08:32.789 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:08:32.790 01:28:40 nvme_scc --
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"'
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:08:32.790 01:28:40 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:08:32.790 01:28:40 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:08:32.790 01:28:40 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:08:32.790 01:28:40 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:08:32.790 01:28:40 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:33.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:33.307 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:08:33.307 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:08:33.307 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:08:33.307 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
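A note on the selection just traced: ctrl_has_scc comes down to a single bit. ONCS (Optional NVM Command Support) bit 8 advertises the Copy command that the Simple Copy test needs, and all four QEMU controllers report oncs=0x15d, so every one of them qualifies; get_ctrl_with_feature then returns the first hit, nvme1 at 0000:00:10.0. A minimal bash sketch of the bit test (variable name illustrative, not the literal functions.sh source):

    oncs=0x15d                    # ONCS value as reported by 'nvme id-ctrl'
    if (( oncs & 1 << 8 )); then  # bit 8 = Copy command (Simple Copy) supported
        echo "controller supports SCC"
    fi

0x15d is binary 1_0101_1101, so bit 8 is set and the arithmetic test succeeds for every controller in this run.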
00:08:33.565 01:28:41 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:33.565 01:28:41 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:33.565 01:28:41 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:33.565 01:28:41 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:33.565 ************************************
00:08:33.565 START TEST nvme_simple_copy
00:08:33.565 ************************************
00:08:33.565 01:28:41 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:33.826 Initializing NVMe Controllers
00:08:33.826 Attaching to 0000:00:10.0
00:08:33.826 Controller supports SCC. Attached to 0000:00:10.0
00:08:33.826 Namespace ID: 1 size: 6GB
00:08:33.826 Initialization complete.
00:08:33.826
00:08:33.826 Controller QEMU NVMe Ctrl (12340 )
00:08:33.826 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:08:33.826 Namespace Block Size:4096
00:08:33.826 Writing LBAs 0 to 63 with Random Data
00:08:33.826 Copied LBAs from 0 - 63 to the Destination LBA 256
00:08:33.826 LBAs matching Written Data: 64
00:08:33.826
00:08:33.826 real 0m0.253s
00:08:33.826 user 0m0.091s
00:08:33.826 sys 0m0.061s
00:08:33.826 01:28:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:33.826 01:28:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:08:33.826 ************************************
00:08:33.826 END TEST nvme_simple_copy
00:08:33.826 ************************************
00:08:33.826
00:08:33.826 real 0m7.525s
00:08:33.826 user 0m1.046s
00:08:33.826 sys 0m1.317s
00:08:33.826 01:28:42 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:33.826 01:28:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:33.826 ************************************
00:08:33.826 END TEST nvme_scc
00:08:33.826 ************************************
00:08:33.826 01:28:42 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:08:33.826 01:28:42 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:08:33.826 01:28:42 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:08:33.826 01:28:42 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:08:33.826 01:28:42 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:08:33.826 01:28:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:33.826 01:28:42 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:33.826 01:28:42 -- common/autotest_common.sh@10 -- # set +x
00:08:33.826 ************************************
00:08:33.826 START TEST nvme_fdp
00:08:33.826 ************************************
00:08:33.826 01:28:42 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
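The starred banners and the real/user/sys triplets around each test come from the run_test wrapper in common/autotest_common.sh, visible in the trace above. A condensed sketch of its shape (the actual wrapper also toggles xtrace and does more bookkeeping than shown here):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # produces the real/user/sys lines in this log
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

Here it was just invoked as run_test nvme_fdp test/nvme/nvme_fdp.sh, which is the test the rest of this log executes.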
00:08:33.826 * Looking for test storage...
00:08:33.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:33.826 01:28:42 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:33.826 01:28:42 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:08:33.826 01:28:42 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:34.100 01:28:42 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:08:34.100 01:28:42 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:34.100 01:28:42 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:34.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.100 --rc genhtml_branch_coverage=1
00:08:34.100 --rc genhtml_function_coverage=1
00:08:34.100 --rc genhtml_legend=1
00:08:34.100 --rc geninfo_all_blocks=1
00:08:34.100 --rc geninfo_unexecuted_blocks=1
00:08:34.100
00:08:34.100 '
00:08:34.100 01:28:42 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:34.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.100 --rc genhtml_branch_coverage=1
00:08:34.100 --rc genhtml_function_coverage=1
00:08:34.100 --rc genhtml_legend=1
00:08:34.100 --rc geninfo_all_blocks=1
00:08:34.100 --rc geninfo_unexecuted_blocks=1
00:08:34.100
00:08:34.100 '
00:08:34.100 01:28:42 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:34.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.100 --rc genhtml_branch_coverage=1
00:08:34.100 --rc genhtml_function_coverage=1
00:08:34.100 --rc genhtml_legend=1
00:08:34.100 --rc geninfo_all_blocks=1
00:08:34.100 --rc geninfo_unexecuted_blocks=1
00:08:34.100
00:08:34.100 '
00:08:34.100 01:28:42 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:34.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.100 --rc genhtml_branch_coverage=1
00:08:34.100 --rc genhtml_function_coverage=1
00:08:34.100 --rc genhtml_legend=1
00:08:34.100 --rc geninfo_all_blocks=1
00:08:34.100 --rc geninfo_unexecuted_blocks=1
00:08:34.100
00:08:34.100 '
00:08:34.100 01:28:42 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:34.100 01:28:42 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:34.100 01:28:42 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:08:34.100 01:28:42 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:08:34.100 01:28:42 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:34.100 01:28:42 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:34.100 01:28:42 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:34.101 01:28:42 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:34.101 01:28:42 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:34.101 01:28:42 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:08:34.101 01:28:42 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
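The lt 1.15 2 / cmp_versions trace near the top of this prologue is scripts/common.sh deciding that the installed lcov (1.15) predates 2.0, which selects the pre-2.0 --rc lcov_branch_coverage options exported above. A condensed, self-contained equivalent of that comparison (not the literal source, which routes through decimal and a case on the operator):

    lt() {    # is dotted version $1 older than $2?
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        done
        return 1    # versions are equal
    }

    lt 1.15 2 && echo "lcov predates 2.0"    # true here: 1 < 2 in the first field, as in the trace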
00:08:34.101 01:28:42 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:08:34.101 01:28:42 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:08:34.101 01:28:42 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:08:34.101 01:28:42 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:08:34.101 01:28:42 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:08:34.101 01:28:42 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:08:34.101 01:28:42 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:08:34.101 01:28:42 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:08:34.101 01:28:42 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:08:34.101 01:28:42 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:34.101 01:28:42 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:08:34.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:34.415 Waiting for block devices as requested
00:08:34.415 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:08:34.415 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:08:34.685 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:08:34.685 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:08:39.988 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:08:39.988 01:28:48 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:08:39.988 01:28:48 nvme_fdp -- scripts/common.sh@18 -- # local i
00:08:39.988 01:28:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:08:39.988 01:28:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:39.988 01:28:48 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
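The long xtrace block that follows is nvme_get repeating for nvme0 what it already did for the controllers in the SCC phase: each "field : value" line printed by /usr/local/src/nvme-cli/nvme id-ctrl becomes a key of a global associative array. A minimal approximation of that loop (simplified; the real functions.sh goes through eval and namerefs so the array name can vary, and also walks each controller's namespaces):

    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue     # skip headers and blank lines
        reg=${reg//[[:space:]]/}      # e.g. vid, sn, oncs, subnqn, ps0
        nvme0[$reg]=${val# }          # keep the value text as reported
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    echo "${nvme0[oncs]}"    # -> 0x15d on these QEMU controllers

Reading the dump below with that loop in mind, each IFS=: / read -r reg val pair is one iteration, and each eval line is one assignment.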
00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:39.988 01:28:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:39.988 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:39.989 01:28:48 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:39.989 01:28:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:39.989 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:39.990 01:28:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:39.990 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:39.990 01:28:48 nvme_fdp -- 
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 nvme0[subnqn]=nqn.2019-08.org.qemu:12341 nvme0[ioccsz]=0 nvme0[iorcsz]=0 nvme0[icdoff]=0 nvme0[fcatt]=0 nvme0[msdbd]=0 nvme0[ofcs]=0 nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' nvme0[active_power_workload]=-
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 nvme0n1[ncap]=0x140000 nvme0n1[nuse]=0x140000
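The records above are SPDK's nvme_get helper (nvme/functions.sh@16-23) walking nvme-cli's plain-text identify output: each line is split on ':' into a register name and value, and eval stores the pair into a global associative array (nvme0, nvme0n1, ...). A minimal self-contained sketch of that pattern, assuming nvme-cli's "field : value" text format; ctrl_regs is an illustrative name, not SPDK's:

#!/usr/bin/env bash
# Sketch only: parse `nvme id-ctrl` text output into an associative array,
# mirroring the IFS=: / read -r reg val / eval loop traced above.
shopt -s extglob
declare -A ctrl_regs

while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}     # field names are single tokens; drop padding
  val=${val##+([[:space:]])}   # trim the padding after ':'
  [[ -n $reg && -n $val ]] && ctrl_regs[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)   # needs nvme-cli and root

echo "subnqn=${ctrl_regs[subnqn]} mdts=${ctrl_regs[mdts]}"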
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 nvme0n1[nlbaf]=7 nvme0n1[flbas]=0x4 nvme0n1[mc]=0x3 nvme0n1[dpc]=0x1f nvme0n1[dps]=0 nvme0n1[nmic]=0 nvme0n1[rescap]=0 nvme0n1[fpi]=0 nvme0n1[dlfeat]=1
00:08:39.991 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 nvme0n1[nawupf]=0 nvme0n1[nacwu]=0 nvme0n1[nabsn]=0 nvme0n1[nabo]=0 nvme0n1[nabspf]=0 nvme0n1[noiob]=0 nvme0n1[nvmcap]=0 nvme0n1[npwg]=0 nvme0n1[npwa]=0 nvme0n1[npdg]=0 nvme0n1[npda]=0 nvme0n1[nows]=0
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 nvme0n1[mcl]=128 nvme0n1[msrc]=127 nvme0n1[nulbaf]=0 nvme0n1[anagrpid]=0 nvme0n1[nsattr]=0 nvme0n1[nvmsetid]=0 nvme0n1[endgid]=0 nvme0n1[nguid]=00000000000000000000000000000000 nvme0n1[eui64]=0000000000000000
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:08:39.992 01:28:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:08:39.993 01:28:48 nvme_fdp -- scripts/common.sh@18 -- # local i
00:08:39.993 01:28:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:08:39.993 01:28:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:39.993 01:28:48 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:08:39.993 01:28:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:08:39.993 01:28:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:08:39.993 01:28:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:08:39.993 01:28:48 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:08:39.993 01:28:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:08:39.993 01:28:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
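Around nvme/functions.sh@47-63 the trace shows the enclosing discovery loop: iterate /sys/class/nvme/nvme*, resolve each controller's PCI address, gate it through pci_can_use (an allow/block-list check in scripts/common.sh), then record the controller, its namespace array, and its BDF (here 0000:00:11.0 and 0000:00:10.0). A rough sketch of that flow, assuming the usual sysfs layout where the device symlink resolves to the PCI function; array names follow the trace but the code is illustrative, not SPDK's:

#!/usr/bin/env bash
# Sketch of the controller discovery/registration traced above.
declare -A ctrls nvmes bdfs

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
  ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
  ctrls[$ctrl_dev]=$ctrl_dev
  bdfs[$ctrl_dev]=$pci
  for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme0n1, nvme0n2, ...
    [[ -e $ns ]] && echo "registered ${ns##*/} on $ctrl_dev ($pci)"
  done
done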
00:08:39.993 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 nvme1[ssvid]=0x1af4 nvme1[sn]='12340 ' nvme1[mn]='QEMU NVMe Ctrl ' nvme1[fr]='8.0.0 ' nvme1[rab]=6 nvme1[ieee]=525400 nvme1[cmic]=0 nvme1[mdts]=7
00:08:39.993 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 nvme1[ver]=0x10400 nvme1[rtd3r]=0 nvme1[rtd3e]=0 nvme1[oaes]=0x100 nvme1[ctratt]=0x8000 nvme1[rrls]=0 nvme1[cntrltype]=1 nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:08:39.993 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 nvme1[crdt2]=0 nvme1[crdt3]=0 nvme1[nvmsr]=0 nvme1[vwci]=0 nvme1[mec]=0 nvme1[oacs]=0x12a nvme1[acl]=3 nvme1[aerl]=3 nvme1[frmw]=0x3
00:08:39.994 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 nvme1[elpe]=0 nvme1[npss]=0 nvme1[avscc]=0 nvme1[apsta]=0 nvme1[wctemp]=343 nvme1[cctemp]=373 nvme1[mtfa]=0 nvme1[hmpre]=0 nvme1[hmmin]=0 nvme1[tnvmcap]=0 nvme1[unvmcap]=0 nvme1[rpmbs]=0 nvme1[edstt]=0 nvme1[dsto]=0 nvme1[fwug]=0 nvme1[kas]=0 nvme1[hctma]=0 nvme1[mntmt]=0
00:08:39.994 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 nvme1[sanicap]=0 nvme1[hmminds]=0 nvme1[hmmaxd]=0 nvme1[nsetidmax]=0 nvme1[endgidmax]=0 nvme1[anatt]=0 nvme1[anacap]=0 nvme1[anagrpmax]=0
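One readable detail in the id-ctrl dump above: wctemp and cctemp are reported in Kelvin, so nvme1's warning and critical temperature thresholds of 343 and 373 work out to 70 °C and 100 °C, the values QEMU's emulated controller advertises by default. The conversion, as a one-liner against the parsed values:

# Kelvin-to-Celsius for the thresholds parsed above (wctemp=343, cctemp=373).
echo "warning=$(( 343 - 273 ))C critical=$(( 373 - 273 ))C"   # 70C / 100C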
00:08:39.995 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 nvme1[pels]=0 nvme1[domainid]=0 nvme1[megcap]=0 nvme1[sqes]=0x66 nvme1[cqes]=0x44 nvme1[maxcmd]=0 nvme1[nn]=256 nvme1[oncs]=0x15d nvme1[fuses]=0
00:08:39.995 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 nvme1[vwc]=0x7 nvme1[awun]=0 nvme1[awupf]=0 nvme1[icsvscc]=0 nvme1[nwpc]=0 nvme1[acwu]=0 nvme1[ocfs]=0x3 nvme1[sgls]=0x1
00:08:39.995 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 nvme1[maxdna]=0 nvme1[maxcna]=0 nvme1[subnqn]=nqn.2019-08.org.qemu:12340 nvme1[ioccsz]=0 nvme1[iorcsz]=0 nvme1[icdoff]=0 nvme1[fcatt]=0 nvme1[msdbd]=0 nvme1[ofcs]=0
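nvme1 reports oncs=0x15d, a bitmask of optional NVM commands. Going by the NVMe base spec's ONCS bit assignments as I recall them (bit 0 Compare, bit 2 Dataset Management, bit 3 Write Zeroes, bit 6 Timestamp, bit 8 Copy; worth verifying against the spec), 0x15d includes Copy support, which the FDP test path exercises. A quick bit test against the parsed value:

# Test optional-command bits in the ONCS value parsed above.
oncs=0x15d
(( oncs & (1 << 2) )) && echo "Dataset Management supported"
(( oncs & (1 << 3) )) && echo "Write Zeroes supported"
(( oncs & (1 << 8) )) && echo "Copy supported"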
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' nvme1[active_power_workload]=-
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a nvme1n1[ncap]=0x17a17a nvme1n1[nuse]=0x17a17a nvme1n1[nsfeat]=0x14 nvme1n1[nlbaf]=7 nvme1n1[flbas]=0x7 nvme1n1[mc]=0x3 nvme1n1[dpc]=0x1f nvme1n1[dps]=0 nvme1n1[nmic]=0 nvme1n1[rescap]=0
00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 nvme1n1[dlfeat]=1 nvme1n1[nawun]=0 nvme1n1[nawupf]=0 nvme1n1[nacwu]=0 nvme1n1[nabsn]=0 nvme1n1[nabo]=0 nvme1n1[nabspf]=0 nvme1n1[noiob]=0 nvme1n1[nvmcap]=0
# read -r reg val 00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.996 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:39.997 01:28:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:39.997 01:28:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:39.997 01:28:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:39.997 01:28:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:39.997 
01:28:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.997 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:39.998 01:28:48 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.998 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
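
What the trace above is doing: nvme_get walks the output of `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2`, splits every `register : value` line with IFS=:, and stores each pair through a guarded eval into a dynamically named associative array (nvme2). A minimal standalone sketch of that same read loop, assuming nvme-cli is installed and using a fixed array name instead of the script's eval-based dynamic one (parse_id_ctrl is a hypothetical helper, not the functions.sh API):

#!/usr/bin/env bash
# Sketch of the IFS=: / read -r reg val loop visible in the trace:
# read `nvme id-ctrl` output into a bash associative array.
declare -A ctrl
parse_id_ctrl() {   # hypothetical helper, not part of nvme/functions.sh
    local dev=$1 reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # strip padding around the key
        [[ -n $reg && -n $val ]] || continue  # skip blank or partial lines
        ctrl[$reg]=${val# }                   # remainder after the first ':' (may itself contain ':')
    done < <(nvme id-ctrl "$dev")
}
parse_id_ctrl /dev/nvme2
echo "mn=${ctrl[mn]} mdts=${ctrl[mdts]} oacs=${ctrl[oacs]}"
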
00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:39.999 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:40.000 01:28:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
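
Fields such as oacs=0x12a, oncs=0x15d and frmw=0x3 captured here are bit masks in which individual bits advertise optional commands and features. A small sketch of decoding one of them, with the bit positions taken from the NVMe base specification (e.g. ONCS bit 2 = Dataset Management, bit 3 = Write Zeroes); this is illustrative, not a complete decode:

#!/usr/bin/env bash
# Sketch: test capability bits in the ONCS mask reported by id-ctrl.
oncs=0x15d
has_bit() { (( (oncs >> $1) & 1 )); }
has_bit 2 && echo "Dataset Management supported"
has_bit 3 && echo "Write Zeroes supported"
has_bit 5 || echo "Reservations not supported"
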
00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:40.000 01:28:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:40.000 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
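
In the id-ns output being parsed here, nsze/ncap/nuse are counts of logical blocks, the low nibble of flbas selects the active LBA format, and each lbafN entry reports lbads = log2(block size). For nvme2n1 the trace gives flbas=0x4; assuming its lbaf4 matches the "ms:0 lbads:12 rp:0" layout shown for nvme1n1 above, that means 4096-byte blocks and a 4 GiB namespace. A short sketch of the arithmetic, hard-coding the values captured in this trace:

#!/usr/bin/env bash
# Sketch: namespace size from id-ns fields (values from the nvme2n1 trace).
nsze=0x100000                 # namespace size in logical blocks
flbas=0x4                     # formatted LBA size field
lbads=12                      # assumed from lbaf4: "ms:0 lbads:12 rp:0"
fmt=$(( flbas & 0xf ))        # active LBA format index -> 4
bs=$(( 1 << lbads ))          # 2^12 = 4096-byte logical blocks
echo "lbaf${fmt}: ${bs}-byte blocks"
echo "size: $(( nsze * bs )) bytes"   # 1048576 * 4096 = 4294967296 (4 GiB)
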
00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.001 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.002 01:28:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:40.002 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:40.003 01:28:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.003 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
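[Editor's sketch] The nvme2n1 -> nvme2n2 -> nvme2n3 progression in the trace comes from the per-controller namespace walk at functions.sh@54-58: glob the controller's sysfs directory for namespace nodes, parse each one, and index it in _ctrl_ns. A minimal sketch under those assumptions (parse_id_ns is the illustrative parser sketched earlier; the controller path matches this trace):

    #!/usr/bin/env bash
    # Walk the namespaces of one controller and record them by index,
    # as the _ctrl_ns assignments in the trace do.
    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme2

    for ns in "$ctrl/${ctrl##*/}n"*; do      # nvme2n1 nvme2n2 nvme2n3 ...
        [[ -e $ns ]] || continue             # glob may match nothing
        ns_dev=${ns##*/}                     # e.g. nvme2n2
        # parse_id_ns "$ns_dev" "/dev/$ns_dev"   # fill the per-namespace array
        _ctrl_ns[${ns##*n}]=$ns_dev          # key is the namespace index
    done

    declare -p _ctrl_ns   # _ctrl_ns=([1]="nvme2n1" [2]="nvme2n2" [3]="nvme2n3")

The ${ns##*n} expansion strips everything through the last "n", so /sys/class/nvme/nvme2/nvme2n2 keys as 2, which is the [${ns##*n}]=nvme2n2 assignment visible in the trace.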
00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:40.004 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:40.005 
01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:40.005 01:28:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:40.005 01:28:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:40.005 01:28:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:40.005 01:28:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:40.005 01:28:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:40.005 01:28:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.005 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 
01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.006 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 
01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.007 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
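The block above is nvme/functions.sh caching a controller's identify data: it reads the output one "reg : val" pair at a time with IFS=: and stores each field via eval 'nvme3[reg]="val"'. A minimal standalone sketch of that pattern in Bash (the id_ctrl_output file name is illustrative, not from the test):

    # Split "reg : val" identify-controller lines on ':' and cache them,
    # the same job the trace does with eval on nvme3[...].
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # register names carry only padding
        read -r val <<<"$val"      # trim surrounding whitespace off the value
        [[ -n $reg && -n $val ]] && nvme3[$reg]=$val
    done < id_ctrl_output
    echo "sqes=${nvme3[sqes]} cqes=${nvme3[cqes]} subnqn=${nvme3[subnqn]}"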
00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:40.008 01:28:48 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:08:40.008 01:28:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
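At this point the trace has moved on to controller selection: ctrl_has_fdp fetches each controller's cached CTRATT value and tests bit 19, the Flexible Data Placement capability bit, so the 0x8000 values (bit 15 only) fail while nvme3's 0x88010 passes. A reduced sketch of that test, using the CTRATT values from this run:

    # CTRATT bit 19 == FDP supported (NVMe 2.0 Identify Controller).
    ctrl_has_fdp() {
        local ctratt=$(( $1 ))
        (( ctratt & 1 << 19 ))   # true only when bit 19 is set
    }
    ctrl_has_fdp 0x8000  || echo "no FDP"          # nvme0/nvme1/nvme2
    ctrl_has_fdp 0x88010 && echo "nvme3 has FDP"   # 0x88010 sets bit 19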
00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:08:40.009 01:28:48 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:08:40.009 01:28:48 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:08:40.009 01:28:48 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:08:40.009 01:28:48 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:40.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:40.843 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.843 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.843 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.843 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.843 01:28:49 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:40.843 01:28:49 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:40.843 01:28:49 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.843 01:28:49 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:08:40.843 ************************************ 00:08:40.843 START TEST nvme_flexible_data_placement 00:08:40.843 ************************************ 00:08:40.843 01:28:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:41.103 Initializing NVMe Controllers 00:08:41.103 Attaching to 0000:00:13.0 00:08:41.103 Controller supports FDP Attached to 0000:00:13.0 00:08:41.103 Namespace ID: 1 Endurance Group ID: 1 00:08:41.103 Initialization complete. 00:08:41.103 00:08:41.103 ================================== 00:08:41.103 == FDP tests for Namespace: #01 == 00:08:41.103 ================================== 00:08:41.103 00:08:41.103 Get Feature: FDP: 00:08:41.103 ================= 00:08:41.103 Enabled: Yes 00:08:41.103 FDP configuration Index: 0 00:08:41.103 00:08:41.103 FDP configurations log page 00:08:41.103 =========================== 00:08:41.103 Number of FDP configurations: 1 00:08:41.103 Version: 0 00:08:41.103 Size: 112 00:08:41.103 FDP Configuration Descriptor: 0 00:08:41.103 Descriptor Size: 96 00:08:41.103 Reclaim Group Identifier format: 2 00:08:41.103 FDP Volatile Write Cache: Not Present 00:08:41.103 FDP Configuration: Valid 00:08:41.103 Vendor Specific Size: 0 00:08:41.103 Number of Reclaim Groups: 2 00:08:41.103 Number of Reclaim Unit Handles: 8 00:08:41.103 Max Placement Identifiers: 128 00:08:41.103 Number of Namespaces Supported: 256 00:08:41.103 Reclaim Unit Nominal Size: 6000000 bytes 00:08:41.103 Estimated Reclaim Unit Time Limit: Not Reported 00:08:41.103 RUH Desc #000: RUH Type: Initially Isolated 00:08:41.103 RUH Desc #001: RUH Type: Initially Isolated 00:08:41.103 RUH Desc #002: RUH Type: Initially Isolated 00:08:41.103 RUH Desc #003: RUH Type: Initially Isolated 00:08:41.103 RUH Desc #004: RUH Type: Initially Isolated 00:08:41.103 RUH Desc #005: RUH Type: Initially Isolated 00:08:41.103 RUH Desc #006: RUH Type: Initially Isolated 00:08:41.103 RUH Desc #007: RUH Type: Initially Isolated 00:08:41.103 00:08:41.103 FDP reclaim unit handle usage log page 00:08:41.103 ====================================== 00:08:41.103 Number of Reclaim Unit Handles: 8 00:08:41.103 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:41.103 RUH Usage Desc #001: RUH Attributes: Unused 00:08:41.103 RUH Usage Desc #002: RUH Attributes: Unused 00:08:41.103 RUH Usage Desc #003: RUH Attributes: Unused 00:08:41.103 RUH Usage Desc #004: RUH Attributes: Unused 00:08:41.103 RUH Usage Desc #005: RUH Attributes: Unused 00:08:41.103 RUH Usage Desc #006: RUH Attributes: Unused 00:08:41.103 RUH Usage Desc #007: RUH Attributes: Unused 00:08:41.103 00:08:41.103 FDP statistics log page 00:08:41.103 ======================= 00:08:41.103 Host bytes with metadata written: 1127034880 00:08:41.103 Media bytes with metadata written: 1127288832 00:08:41.103 Media bytes erased: 0 00:08:41.103 00:08:41.103 FDP Reclaim unit handle status 00:08:41.103 ============================== 00:08:41.103 Number of RUHS descriptors: 2 00:08:41.103 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004d2d 00:08:41.103 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:08:41.103 00:08:41.103 FDP write on placement id: 0 success 00:08:41.103 00:08:41.103 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:08:41.103 00:08:41.103 IO mgmt send: RUH update for Placement ID: #0 Success 00:08:41.103 00:08:41.103 Get Feature: FDP Events for Placement handle: #0 00:08:41.103 ======================== 00:08:41.103 Number of FDP Events: 6 00:08:41.103 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:08:41.103 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:08:41.103 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:08:41.103 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:08:41.103 FDP Event: #4 Type: Media Reallocated Enabled: No 00:08:41.103 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:08:41.103 00:08:41.103 FDP events log page 00:08:41.103 =================== 00:08:41.103 Number of FDP events: 1 00:08:41.103 FDP Event #0: 00:08:41.103 Event Type: RU Not Written to Capacity 00:08:41.103 Placement Identifier: Valid 00:08:41.103 NSID: Valid 00:08:41.103 Location: Valid 00:08:41.103 Placement Identifier: 0 00:08:41.103 Event Timestamp: 6 00:08:41.103 Namespace Identifier: 1 00:08:41.103 Reclaim Group Identifier: 0 00:08:41.103 Reclaim Unit Handle Identifier: 0 00:08:41.103 00:08:41.103 FDP test passed 00:08:41.103 00:08:41.103 real 0m0.232s 00:08:41.103 user 0m0.078s 00:08:41.103 sys 0m0.053s 00:08:41.103 01:28:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.103 01:28:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:08:41.103 ************************************ 00:08:41.103 END TEST nvme_flexible_data_placement 00:08:41.103 ************************************ 00:08:41.365 00:08:41.365 real 0m7.404s 00:08:41.365 user 0m0.961s 00:08:41.365 sys 0m1.350s 00:08:41.365 01:28:49 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.365 01:28:49 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:08:41.365 ************************************ 00:08:41.365 END TEST nvme_fdp 00:08:41.365 ************************************ 00:08:41.365 01:28:49 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:08:41.365 01:28:49 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:08:41.365 01:28:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.365 01:28:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.365 01:28:49 -- common/autotest_common.sh@10 -- # set +x 00:08:41.365 ************************************ 00:08:41.365 START TEST nvme_rpc 00:08:41.365 ************************************ 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:08:41.365 * Looking for test storage...
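The fdp example binary that just ran pulls all of the tables above from the FDP log pages. The same pages can be fetched by log identifier with stock nvme-cli against a kernel-bound device (a sketch, assuming a recent nvme-cli; this run's devices are rebound to uio_pci_generic, so it would not apply here as-is):

    # FDP log pages by LID (NVMe 2.0): 0x20 configurations, 0x21 reclaim
    # unit handle usage, 0x22 statistics, 0x23 events. --lsi selects the
    # endurance group, which is 1 in this run.
    nvme get-log /dev/nvme0 --log-id=0x20 --log-len=512 --lsi=1
    nvme get-log /dev/nvme0 --log-id=0x21 --log-len=512 --lsi=1
    nvme get-log /dev/nvme0 --log-id=0x22 --log-len=64  --lsi=1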
00:08:41.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.365 01:28:49 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.365 --rc genhtml_branch_coverage=1 00:08:41.365 --rc genhtml_function_coverage=1 00:08:41.365 --rc genhtml_legend=1 00:08:41.365 --rc geninfo_all_blocks=1 00:08:41.365 --rc geninfo_unexecuted_blocks=1 00:08:41.365 00:08:41.365 ' 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.365 --rc genhtml_branch_coverage=1 00:08:41.365 --rc genhtml_function_coverage=1 00:08:41.365 --rc genhtml_legend=1 00:08:41.365 --rc geninfo_all_blocks=1 00:08:41.365 --rc geninfo_unexecuted_blocks=1 00:08:41.365 00:08:41.365 ' 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:41.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.365 --rc genhtml_branch_coverage=1 00:08:41.365 --rc genhtml_function_coverage=1 00:08:41.365 --rc genhtml_legend=1 00:08:41.365 --rc geninfo_all_blocks=1 00:08:41.365 --rc geninfo_unexecuted_blocks=1 00:08:41.365 00:08:41.365 ' 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.365 --rc genhtml_branch_coverage=1 00:08:41.365 --rc genhtml_function_coverage=1 00:08:41.365 --rc genhtml_legend=1 00:08:41.365 --rc geninfo_all_blocks=1 00:08:41.365 --rc geninfo_unexecuted_blocks=1 00:08:41.365 00:08:41.365 ' 00:08:41.365 01:28:49 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.365 01:28:49 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:41.365 01:28:49 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:08:41.365 01:28:49 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65703 00:08:41.365 01:28:49 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:41.365 01:28:49 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:08:41.365 01:28:49 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65703 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65703 ']' 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.365 01:28:49 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.626 [2024-11-17 01:28:49.863979] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
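The nvme_rpc setup above picks its target device by asking gen_nvme.sh for a generated SPDK config, extracting every traddr with jq, and keeping the first entry. Condensed, the same selection looks like this (paths as in this workspace):

    # First NVMe PCI address, as get_first_nvme_bdf derives it.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    bdf=${bdfs[0]}
    echo "$bdf"   # 0000:00:10.0 on this run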
00:08:41.626 [2024-11-17 01:28:49.864097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65703 ] 00:08:41.626 [2024-11-17 01:28:50.022930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.887 [2024-11-17 01:28:50.123570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.887 [2024-11-17 01:28:50.123614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.458 01:28:50 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.458 01:28:50 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:42.458 01:28:50 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:08:42.719 Nvme0n1 00:08:42.719 01:28:50 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:08:42.719 01:28:50 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:08:42.719 request: 00:08:42.719 { 00:08:42.719 "bdev_name": "Nvme0n1", 00:08:42.719 "filename": "non_existing_file", 00:08:42.719 "method": "bdev_nvme_apply_firmware", 00:08:42.719 "req_id": 1 00:08:42.719 } 00:08:42.719 Got JSON-RPC error response 00:08:42.719 response: 00:08:42.719 { 00:08:42.719 "code": -32603, 00:08:42.719 "message": "open file failed." 00:08:42.719 } 00:08:42.719 01:28:51 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:08:42.719 01:28:51 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:08:42.719 01:28:51 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:08:42.979 01:28:51 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:42.979 01:28:51 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65703 00:08:42.979 01:28:51 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65703 ']' 00:08:42.979 01:28:51 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65703 00:08:42.979 01:28:51 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:08:42.979 01:28:51 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.979 01:28:51 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65703 00:08:42.979 01:28:51 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.979 01:28:51 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.979 01:28:51 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65703' 00:08:42.979 killing process with pid 65703 00:08:42.979 01:28:51 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65703 00:08:42.979 01:28:51 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65703 00:08:44.364 00:08:44.364 real 0m3.194s 00:08:44.364 user 0m6.078s 00:08:44.364 sys 0m0.478s 00:08:44.364 01:28:52 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.364 01:28:52 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.364 ************************************ 00:08:44.364 END TEST nvme_rpc 00:08:44.364 ************************************ 00:08:44.364 01:28:52 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:44.364 01:28:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:08:44.364 01:28:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.364 01:28:52 -- common/autotest_common.sh@10 -- # set +x 00:08:44.625 ************************************ 00:08:44.625 START TEST nvme_rpc_timeouts 00:08:44.625 ************************************ 00:08:44.625 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:44.625 * Looking for test storage... 00:08:44.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:44.625 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.625 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.625 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.625 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.625 01:28:52 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:08:44.625 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.625 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.625 --rc genhtml_branch_coverage=1 00:08:44.625 --rc genhtml_function_coverage=1 00:08:44.625 --rc genhtml_legend=1 00:08:44.625 --rc geninfo_all_blocks=1 00:08:44.625 --rc geninfo_unexecuted_blocks=1 00:08:44.625 00:08:44.625 ' 00:08:44.625 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.626 --rc genhtml_branch_coverage=1 00:08:44.626 --rc genhtml_function_coverage=1 00:08:44.626 --rc genhtml_legend=1 00:08:44.626 --rc geninfo_all_blocks=1 00:08:44.626 --rc geninfo_unexecuted_blocks=1 00:08:44.626 00:08:44.626 ' 00:08:44.626 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.626 --rc genhtml_branch_coverage=1 00:08:44.626 --rc genhtml_function_coverage=1 00:08:44.626 --rc genhtml_legend=1 00:08:44.626 --rc geninfo_all_blocks=1 00:08:44.626 --rc geninfo_unexecuted_blocks=1 00:08:44.626 00:08:44.626 ' 00:08:44.626 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.626 --rc genhtml_branch_coverage=1 00:08:44.626 --rc genhtml_function_coverage=1 00:08:44.626 --rc genhtml_legend=1 00:08:44.626 --rc geninfo_all_blocks=1 00:08:44.626 --rc geninfo_unexecuted_blocks=1 00:08:44.626 00:08:44.626 ' 00:08:44.626 01:28:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.626 01:28:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65768 00:08:44.626 01:28:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65768 00:08:44.626 01:28:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65800 00:08:44.626 01:28:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:08:44.626 01:28:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65800 00:08:44.626 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65800 ']' 00:08:44.626 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.626 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.626 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
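The launch scaffolding above is the reusable part: start spdk_tgt in the background, install a trap that kills it and removes the temporary settings dumps on any exit, then block in waitforlisten until the RPC socket answers. A trimmed sketch of that pattern (the polling loop is illustrative, not the exact waitforlisten implementation):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x3 & spdk_tgt_pid=$!
    trap 'kill -9 $spdk_tgt_pid; rm -f /tmp/settings_default_65768 /tmp/settings_modified_65768; exit 1' SIGINT SIGTERM EXIT
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll the default RPC socket until the target is listening.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done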
00:08:44.626 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.626 01:28:52 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:44.626 01:28:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:44.626 [2024-11-17 01:28:53.040300] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:44.626 [2024-11-17 01:28:53.040418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65800 ] 00:08:44.887 [2024-11-17 01:28:53.198088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:44.887 [2024-11-17 01:28:53.294599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.887 [2024-11-17 01:28:53.294700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.459 Checking default timeout settings: 00:08:45.459 01:28:53 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.459 01:28:53 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:08:45.459 01:28:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:08:45.459 01:28:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:08:46.031 Making settings changes with rpc: 00:08:46.031 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:08:46.031 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:08:46.031 Check default vs. modified settings: 00:08:46.031 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:08:46.031 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65768 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65768 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:46.293 Setting action_on_timeout is changed as expected. 
00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65768 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65768 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:46.293 Setting timeout_us is changed as expected. 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65768 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65768 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:46.293 Setting timeout_admin_us is changed as expected. 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
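Each of the three verifications above follows the same recipe: save_config before and after bdev_nvme_set_options, pull the field out of each dump with grep/awk/sed, and require that the value changed. Treating the dumps as the JSON that save_config emits, jq can do the extraction in one step (a sketch of an equivalent check, not the test's own code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default_65768
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified_65768
    for key in action_on_timeout timeout_us timeout_admin_us; do
        before=$(jq -r ".. | .${key}? // empty" /tmp/settings_default_65768)
        after=$(jq -r ".. | .${key}? // empty" /tmp/settings_modified_65768)
        [[ $before != "$after" ]] && echo "Setting $key is changed as expected."
    done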
00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65768 /tmp/settings_modified_65768 00:08:46.293 01:28:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65800 00:08:46.293 01:28:54 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65800 ']' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65800 00:08:46.293 01:28:54 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:08:46.293 01:28:54 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.293 01:28:54 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65800 00:08:46.554 killing process with pid 65800 00:08:46.554 01:28:54 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.554 01:28:54 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.554 01:28:54 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65800' 00:08:46.554 01:28:54 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65800 00:08:46.554 01:28:54 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65800 00:08:47.938 RPC TIMEOUT SETTING TEST PASSED. 00:08:47.938 01:28:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:08:47.938 00:08:47.938 real 0m3.516s 00:08:47.938 user 0m6.802s 00:08:47.938 sys 0m0.486s 00:08:47.938 01:28:56 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.938 01:28:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:47.938 ************************************ 00:08:47.938 END TEST nvme_rpc_timeouts 00:08:47.938 ************************************ 00:08:47.938 01:28:56 -- spdk/autotest.sh@239 -- # uname -s 00:08:47.938 01:28:56 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:08:47.939 01:28:56 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:08:47.939 01:28:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.939 01:28:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.939 01:28:56 -- common/autotest_common.sh@10 -- # set +x 00:08:47.939 ************************************ 00:08:47.939 START TEST sw_hotplug 00:08:47.939 ************************************ 00:08:47.939 01:28:56 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:08:48.200 * Looking for test storage... 
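killprocess, used just above to tear down pid 65800, is guarded rather than a bare kill: it verifies the PID still resolves to a process, reads the command name with ps, refuses to signal a sudo wrapper, then kills and waits. A compact sketch with the same intent (assumed equivalent semantics, Linux ps syntax):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1          # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }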
00:08:48.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:48.200 01:28:56 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:48.200 01:28:56 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:48.200 01:28:56 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:08:48.200 01:28:56 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.200 01:28:56 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:08:48.200 01:28:56 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.200 01:28:56 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:48.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.200 --rc genhtml_branch_coverage=1 00:08:48.200 --rc genhtml_function_coverage=1 00:08:48.200 --rc genhtml_legend=1 00:08:48.200 --rc geninfo_all_blocks=1 00:08:48.200 --rc geninfo_unexecuted_blocks=1 00:08:48.200 00:08:48.200 ' 00:08:48.200 01:28:56 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:48.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.200 --rc genhtml_branch_coverage=1 00:08:48.200 --rc genhtml_function_coverage=1 00:08:48.200 --rc genhtml_legend=1 00:08:48.200 --rc geninfo_all_blocks=1 00:08:48.200 --rc geninfo_unexecuted_blocks=1 00:08:48.200 00:08:48.200 ' 00:08:48.200 01:28:56 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:48.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.200 --rc genhtml_branch_coverage=1 00:08:48.200 --rc genhtml_function_coverage=1 00:08:48.200 --rc genhtml_legend=1 00:08:48.200 --rc geninfo_all_blocks=1 00:08:48.200 --rc geninfo_unexecuted_blocks=1 00:08:48.200 00:08:48.200 ' 00:08:48.200 01:28:56 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:48.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.200 --rc genhtml_branch_coverage=1 00:08:48.200 --rc genhtml_function_coverage=1 00:08:48.200 --rc genhtml_legend=1 00:08:48.200 --rc geninfo_all_blocks=1 00:08:48.200 --rc geninfo_unexecuted_blocks=1 00:08:48.200 00:08:48.200 ' 00:08:48.200 01:28:56 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:48.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:48.722 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:48.722 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:48.722 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:48.722 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:48.722 01:28:56 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:08:48.722 01:28:56 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:08:48.722 01:28:56 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:08:48.722 01:28:56 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@233 -- # local class 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:08:48.722 01:28:56 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:48.722 
01:28:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:08:48.722 01:28:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:08:48.723 01:28:57 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:48.723 01:28:57 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:08:48.723 01:28:57 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:08:48.723 01:28:57 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:48.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.244 Waiting for block devices as requested 00:08:49.244 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:49.244 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:49.244 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:49.505 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.779 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:54.779 01:29:02 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:08:54.779 01:29:02 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:54.779 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:08:54.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.779 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:08:55.037 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:08:55.295 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.295 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.295 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:08:55.295 01:29:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66651 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:08:55.553 01:29:03 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:08:55.553 01:29:03 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:08:55.553 01:29:03 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:08:55.553 01:29:03 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:08:55.553 01:29:03 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:08:55.553 01:29:03 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:08:55.553 Initializing NVMe Controllers 00:08:55.553 Attaching to 0000:00:10.0 00:08:55.553 Attaching to 0000:00:11.0 00:08:55.553 Attached to 0000:00:10.0 00:08:55.553 Attached to 0000:00:11.0 00:08:55.553 Initialization complete. Starting I/O... 00:08:55.811 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:08:55.811 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:08:55.811 00:08:56.745 QEMU NVMe Ctrl (12340 ): 2514 I/Os completed (+2514) 00:08:56.745 QEMU NVMe Ctrl (12341 ): 2568 I/Os completed (+2568) 00:08:56.745 00:08:57.679 QEMU NVMe Ctrl (12340 ): 5736 I/Os completed (+3222) 00:08:57.679 QEMU NVMe Ctrl (12341 ): 5797 I/Os completed (+3229) 00:08:57.679 00:08:58.612 QEMU NVMe Ctrl (12340 ): 9492 I/Os completed (+3756) 00:08:58.612 QEMU NVMe Ctrl (12341 ): 9528 I/Os completed (+3731) 00:08:58.612 00:08:59.985 QEMU NVMe Ctrl (12340 ): 12541 I/Os completed (+3049) 00:08:59.985 QEMU NVMe Ctrl (12341 ): 12532 I/Os completed (+3004) 00:08:59.985 00:09:00.919 QEMU NVMe Ctrl (12340 ): 15510 I/Os completed (+2969) 00:09:00.919 QEMU NVMe Ctrl (12341 ): 15506 I/Os completed (+2974) 00:09:00.919 00:09:01.485 01:29:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:01.485 01:29:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:01.485 01:29:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:01.485 [2024-11-17 01:29:09.826305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:01.485 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:01.486 [2024-11-17 01:29:09.827478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.827527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.827555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.827576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:01.486 [2024-11-17 01:29:09.829522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.829564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.829578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.829592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 01:29:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:01.486 01:29:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:01.486 [2024-11-17 01:29:09.850060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
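The timing_cmd wrapper in the trace times remove_attach_helper with bash's time builtin; TIMEFORMAT=%2R makes time emit only the elapsed real seconds with two decimals (the figure reported as helper_time later in the run). Capturing that number while letting the timed command keep its own stdout/stderr needs a little file-descriptor juggling, which is what the bare exec in the trace hints at. A sketch of the common idiom, not a line-for-line reconstruction (the real helper also records the command's exit status, cmd_es in the trace):

    timing_cmd() {
        local time=0 TIMEFORMAT=%2R
        # Duplicate the current stdout/stderr so the timed command can
        # still reach them from inside the command substitution.
        exec 3>&1 4>&2
        # time writes to the group's stderr, which 2>&1 routes into the capture;
        # the command's own output passes through on fds 3 and 4.
        time=$({ time "$@" 1>&3 2>&4; } 2>&1)
        exec 3>&- 4>&-
        echo "$time"       # e.g. "42.93"
    }

For instance, timing_cmd sleep 2 would print 2.00 while sleep's (empty) output still went to the terminal.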
00:09:01.486 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:01.486 [2024-11-17 01:29:09.851131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.851170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.851191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.851206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:01.486 [2024-11-17 01:29:09.852895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.852929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.852945] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 [2024-11-17 01:29:09.852958] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:01.486 01:29:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:01.486 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:01.486 01:29:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:01.486 EAL: Scan for (pci) bus failed. 00:09:01.744 01:29:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:01.744 01:29:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:01.744 01:29:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:01.744 00:09:01.744 01:29:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:01.744 01:29:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:01.744 01:29:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:01.744 01:29:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:01.744 01:29:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:01.744 Attaching to 0000:00:10.0 00:09:01.744 Attached to 0000:00:10.0 00:09:01.744 01:29:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:01.744 01:29:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:01.744 01:29:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:01.744 Attaching to 0000:00:11.0 00:09:01.744 Attached to 0000:00:11.0 00:09:02.684 QEMU NVMe Ctrl (12340 ): 3433 I/Os completed (+3433) 00:09:02.684 QEMU NVMe Ctrl (12341 ): 3133 I/Os completed (+3133) 00:09:02.684 00:09:03.627 QEMU NVMe Ctrl (12340 ): 7476 I/Os completed (+4043) 00:09:03.627 QEMU NVMe Ctrl (12341 ): 7177 I/Os completed (+4044) 00:09:03.627 00:09:04.570 QEMU NVMe Ctrl (12340 ): 11151 I/Os completed (+3675) 00:09:04.570 QEMU NVMe Ctrl (12341 ): 10866 I/Os completed (+3689) 00:09:04.570 00:09:05.962 QEMU NVMe Ctrl (12340 ): 14014 I/Os completed (+2863) 00:09:05.962 QEMU NVMe Ctrl (12341 ): 13733 I/Os completed (+2867) 00:09:05.962 00:09:06.906 QEMU NVMe Ctrl (12340 ): 16836 I/Os completed (+2822) 00:09:06.906 QEMU NVMe Ctrl (12341 ): 16558 I/Os completed (+2825) 00:09:06.906 00:09:07.847 QEMU NVMe Ctrl (12340 ): 19771 I/Os completed (+2935) 00:09:07.847 QEMU NVMe Ctrl (12341 ): 19556 I/Os completed (+2998) 00:09:07.848 00:09:08.789 QEMU NVMe Ctrl (12340 ): 22491 I/Os completed (+2720) 00:09:08.789 QEMU NVMe Ctrl (12341 ): 22279 I/Os completed (+2723) 
00:09:08.789 00:09:09.729 QEMU NVMe Ctrl (12340 ): 25275 I/Os completed (+2784) 00:09:09.729 QEMU NVMe Ctrl (12341 ): 25059 I/Os completed (+2780) 00:09:09.729 00:09:10.669 QEMU NVMe Ctrl (12340 ): 28035 I/Os completed (+2760) 00:09:10.669 QEMU NVMe Ctrl (12341 ): 27819 I/Os completed (+2760) 00:09:10.669 00:09:11.626 QEMU NVMe Ctrl (12340 ): 30860 I/Os completed (+2825) 00:09:11.626 QEMU NVMe Ctrl (12341 ): 30643 I/Os completed (+2824) 00:09:11.626 00:09:12.567 QEMU NVMe Ctrl (12340 ): 33656 I/Os completed (+2796) 00:09:12.567 QEMU NVMe Ctrl (12341 ): 33435 I/Os completed (+2792) 00:09:12.567 00:09:13.951 QEMU NVMe Ctrl (12340 ): 36384 I/Os completed (+2728) 00:09:13.951 QEMU NVMe Ctrl (12341 ): 36167 I/Os completed (+2732) 00:09:13.951 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:13.951 [2024-11-17 01:29:22.136139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:13.951 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:13.951 [2024-11-17 01:29:22.137523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.137593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.137612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.137631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:13.951 [2024-11-17 01:29:22.139826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.139888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.139903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.139918] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:13.951 [2024-11-17 01:29:22.161586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
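Each hotplug event in this phase is the same two-step dance: sw_hotplug.sh@40 writes 1 into the device's sysfs remove node to simulate a surprise removal (producing the "in failed state" / "aborting outstanding command" storm in the log), and @56-62 later bring the function back and rebind uio_pci_generic. The trace does not spell out the sysfs paths, so the sketch below uses the standard kernel PCI ABI rather than quoting the script:

    bdf=0000:00:10.0
    driver=uio_pci_generic

    # Surprise-remove the function; outstanding I/O gets aborted.
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"

    # Later: rediscover the device and hand it to the chosen driver.
    echo 1 > /sys/bus/pci/rescan
    echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo "" > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override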
00:09:13.951 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:13.951 [2024-11-17 01:29:22.162785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.162855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.162878] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.162895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:13.951 [2024-11-17 01:29:22.164810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.164858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.164876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 [2024-11-17 01:29:22.164892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:13.951 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:13.951 EAL: Scan for (pci) bus failed. 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:13.951 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:13.951 Attaching to 0000:00:10.0 00:09:13.951 Attached to 0000:00:10.0 00:09:14.212 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:14.212 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:14.212 01:29:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:14.212 Attaching to 0000:00:11.0 00:09:14.212 Attached to 0000:00:11.0 00:09:14.784 QEMU NVMe Ctrl (12340 ): 1600 I/Os completed (+1600) 00:09:14.784 QEMU NVMe Ctrl (12341 ): 1401 I/Os completed (+1401) 00:09:14.784 00:09:15.728 QEMU NVMe Ctrl (12340 ): 4280 I/Os completed (+2680) 00:09:15.728 QEMU NVMe Ctrl (12341 ): 4084 I/Os completed (+2683) 00:09:15.728 00:09:16.671 QEMU NVMe Ctrl (12340 ): 7543 I/Os completed (+3263) 00:09:16.671 QEMU NVMe Ctrl (12341 ): 7347 I/Os completed (+3263) 00:09:16.671 00:09:17.615 QEMU NVMe Ctrl (12340 ): 10341 I/Os completed (+2798) 00:09:17.615 QEMU NVMe Ctrl (12341 ): 10191 I/Os completed (+2844) 00:09:17.615 00:09:18.992 QEMU NVMe Ctrl (12340 ): 13881 I/Os completed (+3540) 00:09:18.992 QEMU NVMe Ctrl (12341 ): 13731 I/Os completed (+3540) 00:09:18.992 00:09:19.565 QEMU NVMe Ctrl (12340 ): 16897 I/Os completed (+3016) 00:09:19.565 QEMU NVMe Ctrl (12341 ): 16747 I/Os completed (+3016) 00:09:19.565 00:09:20.948 QEMU NVMe Ctrl (12340 ): 19345 I/Os completed (+2448) 00:09:20.948 QEMU NVMe Ctrl (12341 ): 19176 I/Os completed (+2429) 00:09:20.948 
00:09:21.880 QEMU NVMe Ctrl (12340 ): 22919 I/Os completed (+3574) 00:09:21.880 QEMU NVMe Ctrl (12341 ): 22748 I/Os completed (+3572) 00:09:21.880 00:09:22.815 QEMU NVMe Ctrl (12340 ): 26775 I/Os completed (+3856) 00:09:22.815 QEMU NVMe Ctrl (12341 ): 26635 I/Os completed (+3887) 00:09:22.815 00:09:23.750 QEMU NVMe Ctrl (12340 ): 30623 I/Os completed (+3848) 00:09:23.750 QEMU NVMe Ctrl (12341 ): 30483 I/Os completed (+3848) 00:09:23.750 00:09:24.754 QEMU NVMe Ctrl (12340 ): 34503 I/Os completed (+3880) 00:09:24.754 QEMU NVMe Ctrl (12341 ): 34363 I/Os completed (+3880) 00:09:24.754 00:09:25.688 QEMU NVMe Ctrl (12340 ): 38367 I/Os completed (+3864) 00:09:25.688 QEMU NVMe Ctrl (12341 ): 38227 I/Os completed (+3864) 00:09:25.688 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:26.255 [2024-11-17 01:29:34.451731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:26.255 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:26.255 [2024-11-17 01:29:34.452741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.452779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.452805] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.452820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:26.255 [2024-11-17 01:29:34.454562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.454600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.454613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.454629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:26.255 [2024-11-17 01:29:34.473244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:26.255 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:26.255 [2024-11-17 01:29:34.474121] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.474159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.474174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.474186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:26.255 [2024-11-17 01:29:34.475592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.475622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.475638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 [2024-11-17 01:29:34.475659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:26.255 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:26.255 EAL: Scan for (pci) bus failed. 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:26.255 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:26.255 Attaching to 0000:00:10.0 00:09:26.255 Attached to 0000:00:10.0 00:09:26.514 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:26.514 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:26.514 01:29:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:26.514 Attaching to 0000:00:11.0 00:09:26.514 Attached to 0000:00:11.0 00:09:26.514 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:26.514 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:26.514 [2024-11-17 01:29:34.757032] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:09:38.748 01:29:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:38.748 01:29:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:38.748 01:29:46 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.93 00:09:38.748 01:29:46 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.93 00:09:38.748 01:29:46 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:09:38.748 01:29:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.93 00:09:38.748 01:29:46 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.93 2 00:09:38.748 remove_attach_helper took 42.93s to complete (handling 2 nvme drive(s)) 01:29:46 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:09:45.340 01:29:52 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66651 00:09:45.340 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66651) - No such process 00:09:45.340 01:29:52 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66651 00:09:45.340 01:29:52 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:09:45.340 01:29:52 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:09:45.340 01:29:52 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:09:45.340 01:29:52 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67202 00:09:45.340 01:29:52 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:09:45.340 01:29:52 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67202 00:09:45.340 01:29:52 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:45.340 01:29:52 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67202 ']' 00:09:45.340 01:29:52 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.340 01:29:52 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.340 01:29:52 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.340 01:29:52 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.340 01:29:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:45.340 [2024-11-17 01:29:52.853507] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
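The waitforlisten call in the trace is the usual poll-until-ready loop: keep checking that the spdk_tgt PID is still alive and that its RPC socket answers, up to max_retries. A simplified sketch; rpc_get_methods is a real SPDK RPC, but the retry count, sleep interval, and readiness check here are illustrative rather than copied from autotest_common.sh:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            # socket node exists and the RPC server answers -> ready
            if [ -S "$rpc_addr" ] &&
                /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }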
00:09:45.340 [2024-11-17 01:29:52.853655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67202 ] 00:09:45.340 [2024-11-17 01:29:53.018824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.340 [2024-11-17 01:29:53.148996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.602 01:29:53 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.602 01:29:53 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:09:45.602 01:29:53 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:09:45.602 01:29:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.602 01:29:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:45.602 01:29:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.602 01:29:53 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:09:45.602 01:29:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:45.602 01:29:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:09:45.602 01:29:53 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:45.602 01:29:53 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:45.602 01:29:53 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:45.602 01:29:53 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:45.602 01:29:53 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:09:45.602 01:29:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:45.602 01:29:53 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:45.602 01:29:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:09:45.602 01:29:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:45.602 01:29:53 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:52.186 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:52.186 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:52.186 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:52.186 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:52.187 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:52.187 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:09:52.187 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:09:52.187 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:09:52.187 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:09:52.187 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:52.187 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:09:52.187 01:29:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.187 01:29:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:52.187 01:29:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.187 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:09:52.187 01:29:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:09:52.187 [2024-11-17 01:29:59.925165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:09:52.187 [2024-11-17 01:29:59.926363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.187 [2024-11-17 01:29:59.926392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.187 [2024-11-17 01:29:59.926405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.187 [2024-11-17 01:29:59.926422] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.187 [2024-11-17 01:29:59.926429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.187 [2024-11-17 01:29:59.926437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.187 [2024-11-17 01:29:59.926444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.187 [2024-11-17 01:29:59.926451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.187 [2024-11-17 01:29:59.926458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.187 [2024-11-17 01:29:59.926468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.187 [2024-11-17 01:29:59.926474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.187 [2024-11-17 01:29:59.926482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.187 [2024-11-17 01:30:00.325185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
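In this bdev-mode phase the script no longer watches sysfs directly; it asks the target what it can see. The bdev_bdfs helper in the trace reduces the bdev_get_bdevs JSON to a sorted list of backing PCI addresses (the /dev/fd/63 in the trace is just bash process-substitution plumbing). Reassembled from the exact jq filter shown:

    bdev_bdfs() {
        # rpc_cmd wraps scripts/rpc.py against the running spdk_tgt
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

After each surprise removal the script re-reads this list until the detached BDF drops out, and after re-attach until 0000:00:10.0 and 0000:00:11.0 both show up again.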
00:09:52.187 [2024-11-17 01:30:00.326409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.187 [2024-11-17 01:30:00.326436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.187 [2024-11-17 01:30:00.326447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.187 [2024-11-17 01:30:00.326463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.187 [2024-11-17 01:30:00.326471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.187 [2024-11-17 01:30:00.326478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.187 [2024-11-17 01:30:00.326486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.187 [2024-11-17 01:30:00.326493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.187 [2024-11-17 01:30:00.326500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.187 [2024-11-17 01:30:00.326507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:52.187 [2024-11-17 01:30:00.326515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.187 [2024-11-17 01:30:00.326521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:52.187 01:30:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.187 01:30:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:52.187 01:30:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:52.187 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:09:52.448 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:52.448 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:52.448 01:30:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:04.675 01:30:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.675 01:30:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:04.675 01:30:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.675 [2024-11-17 01:30:12.725378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:04.675 [2024-11-17 01:30:12.726758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.675 [2024-11-17 01:30:12.726799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.675 [2024-11-17 01:30:12.726810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.675 [2024-11-17 01:30:12.726827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.675 [2024-11-17 01:30:12.726834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.675 [2024-11-17 01:30:12.726842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.675 [2024-11-17 01:30:12.726849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.675 [2024-11-17 01:30:12.726858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.675 [2024-11-17 01:30:12.726865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.675 [2024-11-17 01:30:12.726874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.675 [2024-11-17 01:30:12.726880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.675 [2024-11-17 01:30:12.726888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.675 01:30:12 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:04.675 01:30:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.675 01:30:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:04.675 01:30:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:04.675 01:30:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:04.675 [2024-11-17 01:30:13.125378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:04.675 [2024-11-17 01:30:13.126516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.675 [2024-11-17 01:30:13.126542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.675 [2024-11-17 01:30:13.126554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.675 [2024-11-17 01:30:13.126567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.675 [2024-11-17 01:30:13.126576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.675 [2024-11-17 01:30:13.126583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.675 [2024-11-17 01:30:13.126591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.675 [2024-11-17 01:30:13.126597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.676 [2024-11-17 01:30:13.126605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.676 [2024-11-17 01:30:13.126612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.676 [2024-11-17 01:30:13.126619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.676 [2024-11-17 01:30:13.126626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.937 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:04.937 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:04.937 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:04.937 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:04.937 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:04.937 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:10:04.937 01:30:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.937 01:30:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:04.937 01:30:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.937 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:04.937 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:05.198 01:30:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:17.439 01:30:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.439 01:30:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:17.439 01:30:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:17.439 [2024-11-17 01:30:25.625600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:17.439 [2024-11-17 01:30:25.627028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.439 [2024-11-17 01:30:25.627133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:17.439 [2024-11-17 01:30:25.627197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:17.439 [2024-11-17 01:30:25.627254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.439 [2024-11-17 01:30:25.627273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:17.439 [2024-11-17 01:30:25.627329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:17.439 [2024-11-17 01:30:25.627357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.439 [2024-11-17 01:30:25.627374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:17.439 [2024-11-17 01:30:25.627434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:17.439 [2024-11-17 01:30:25.627493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.439 [2024-11-17 01:30:25.627512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:17.439 [2024-11-17 01:30:25.627561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:17.439 01:30:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.439 01:30:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:17.439 01:30:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:17.439 01:30:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:17.701 [2024-11-17 01:30:26.025597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
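The "(( 1 > 0 )) ... sleep 0.5 ... Still waiting for %s to be gone" lines around here are the detach-side wait loop: intersect the expected BDFs with what bdev_bdfs still reports and spin in half-second steps while the intersection is non-empty. A compact equivalent (hypothetical function name; sw_hotplug.sh inlines this logic at lines 50-51):

    wait_until_gone() {
        local bdf=$1 bdfs
        bdfs=($(bdev_bdfs))
        # space-padded substring match against the array contents
        while [[ " ${bdfs[*]} " == *" $bdf "* ]]; do
            printf 'Still waiting for %s to be gone\n' "$bdf"
            sleep 0.5
            bdfs=($(bdev_bdfs))
        done
    }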
00:10:17.701 [2024-11-17 01:30:26.026764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.701 [2024-11-17 01:30:26.026805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:17.701 [2024-11-17 01:30:26.026817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:17.701 [2024-11-17 01:30:26.026830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.701 [2024-11-17 01:30:26.026838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:17.701 [2024-11-17 01:30:26.026845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:17.701 [2024-11-17 01:30:26.026854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.701 [2024-11-17 01:30:26.026860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:17.701 [2024-11-17 01:30:26.026870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:17.701 [2024-11-17 01:30:26.026876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.701 [2024-11-17 01:30:26.026884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:17.701 [2024-11-17 01:30:26.026890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:17.961 01:30:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.961 01:30:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:17.961 01:30:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:17.961 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:18.222 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:18.222 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:18.222 01:30:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.64 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.64 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:10:30.460 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:30.460 01:30:38 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:30.460 01:30:38 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:30.460 01:30:38 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.050 01:30:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.050 01:30:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.050 01:30:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:37.050 01:30:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:37.050 [2024-11-17 01:30:44.596991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:37.050 [2024-11-17 01:30:44.597890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.050 [2024-11-17 01:30:44.597921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.050 [2024-11-17 01:30:44.597932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.050 [2024-11-17 01:30:44.597947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.050 [2024-11-17 01:30:44.597955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.050 [2024-11-17 01:30:44.597963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.050 [2024-11-17 01:30:44.597970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.050 [2024-11-17 01:30:44.597980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.050 [2024-11-17 01:30:44.597987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.050 [2024-11-17 01:30:44.597995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.050 [2024-11-17 01:30:44.598002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.050 [2024-11-17 01:30:44.598011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.050 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:37.050 01:30:45 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.050 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.050 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.050 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.050 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.050 01:30:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.050 01:30:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.050 [2024-11-17 01:30:45.096988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:37.051 [2024-11-17 01:30:45.097845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.051 [2024-11-17 01:30:45.097870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.051 [2024-11-17 01:30:45.097881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.051 [2024-11-17 01:30:45.097892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.051 [2024-11-17 01:30:45.097901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.051 [2024-11-17 01:30:45.097908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.051 [2024-11-17 01:30:45.097916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.051 [2024-11-17 01:30:45.097923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.051 [2024-11-17 01:30:45.097931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.051 [2024-11-17 01:30:45.097938] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.051 [2024-11-17 01:30:45.097945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.051 [2024-11-17 01:30:45.097952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.051 01:30:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.051 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:37.051 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.311 01:30:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.311 01:30:45 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.311 01:30:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.311 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:37.571 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:37.571 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.571 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.571 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.571 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:37.571 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:37.571 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.571 01:30:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.812 01:30:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.812 01:30:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.812 01:30:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.812 01:30:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.812 01:30:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.812 01:30:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:49.812 01:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:49.812 [2024-11-17 01:30:57.997209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
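The alternating (( N > 0 )) checks, sleep 0.5 calls and 'Still waiting for %s to be gone' lines above are the wait loop at sw_hotplug.sh@50-51. Its shape, as reconstructed from the trace, is a poll on the bdev list until every removed BDF has disappeared:

    # Reconstructed loop shape; bdev_bdfs is the helper sketched earlier.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        sleep 0.5                                                  # @50
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"    # @51
        bdfs=($(bdev_bdfs))                                        # @50
    done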
00:10:49.812 [2024-11-17 01:30:57.998092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.812 [2024-11-17 01:30:57.998121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.812 [2024-11-17 01:30:57.998132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.812 [2024-11-17 01:30:57.998149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.812 [2024-11-17 01:30:57.998156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.812 [2024-11-17 01:30:57.998165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.812 [2024-11-17 01:30:57.998172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.812 [2024-11-17 01:30:57.998180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.812 [2024-11-17 01:30:57.998187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.812 [2024-11-17 01:30:57.998196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.813 [2024-11-17 01:30:57.998202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.813 [2024-11-17 01:30:57.998211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.074 [2024-11-17 01:30:58.397214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
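Each hotplug event above follows the same script: detach every controller, poll until their bdevs are gone, rebind, then give the devices time to reappear before checking that both BDFs are back. A skeleton of remove_attach_helper under the names visible in the trace; remove_dev and reattach_dev are the sketches above, nvmes[] is assumed to hold the two BDFs under test, and reading the sleep 12 at @66 as twice the hotplug_wait of 6 is an inference, not something the log states.

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3        # @27-29
        while (( hotplug_events-- )); do                           # @38
            for dev in "${nvmes[@]}"; do remove_dev "$dev"; done   # @39-40
            # poll with bdev_bdfs until the list is empty (@43-51)
            for dev in "${nvmes[@]}"; do reattach_dev "$dev"; done # @58-62
            sleep $((hotplug_wait * 2))                            # @66: sleep 12
            # then verify both BDFs are visible again (@68-71)
        done
    }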
00:10:50.074 [2024-11-17 01:30:58.398304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.074 [2024-11-17 01:30:58.398334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.074 [2024-11-17 01:30:58.398345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.074 [2024-11-17 01:30:58.398358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.074 [2024-11-17 01:30:58.398370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.074 [2024-11-17 01:30:58.398377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.074 [2024-11-17 01:30:58.398386] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.074 [2024-11-17 01:30:58.398392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.074 [2024-11-17 01:30:58.398400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.074 [2024-11-17 01:30:58.398407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.074 [2024-11-17 01:30:58.398415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.074 [2024-11-17 01:30:58.398421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.074 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:50.074 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.074 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.074 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.074 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.074 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.074 01:30:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.074 01:30:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.074 01:30:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.335 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:50.335 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:50.335 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.335 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.336 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:50.336 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:50.336 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:50.336 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.336 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.336 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:50.336 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:50.336 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:50.336 01:30:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.591 01:31:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.591 01:31:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.591 01:31:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.591 01:31:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.591 01:31:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.591 01:31:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:02.591 01:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:02.591 [2024-11-17 01:31:10.897444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
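Before this final round of cycles started, the test toggled SPDK's own bdev-layer hotplug poller off and immediately back on over RPC (the bdev_nvme_set_hotplug calls at sw_hotplug.sh@119-120 earlier in the log); the flags below are exactly as traced, though the intent of the toggle is not stated in the log itself.

    rpc_cmd bdev_nvme_set_hotplug -d   # @119: disable the hotplug poller
    rpc_cmd bdev_nvme_set_hotplug -e   # @120: re-enable it for this pass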
00:11:02.591 [2024-11-17 01:31:10.898440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.591 [2024-11-17 01:31:10.898540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.591 [2024-11-17 01:31:10.898598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.591 [2024-11-17 01:31:10.898660] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.591 [2024-11-17 01:31:10.898679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.591 [2024-11-17 01:31:10.898704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.591 [2024-11-17 01:31:10.898824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.591 [2024-11-17 01:31:10.898846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.591 [2024-11-17 01:31:10.898899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.591 [2024-11-17 01:31:10.898956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.591 [2024-11-17 01:31:10.899000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.591 [2024-11-17 01:31:10.899024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.852 [2024-11-17 01:31:11.297435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
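The time=45.19 / TIMEFORMAT=%2R entries that bracket each pass come from the timing wrapper at autotest_common.sh@709-722. A minimal sketch of the idea rather than the exact SPDK helper: run the command under bash's time builtin with TIMEFORMAT=%2R and capture the elapsed seconds, discarding the command's own stdout so only the %2R line reaches the capture (this assumes the command writes nothing to stderr).

    TIMEFORMAT=%2R
    helper_time=$( { time remove_attach_helper 3 6 true > /dev/null; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"   # 2 drives in this run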
00:11:02.852 [2024-11-17 01:31:11.298585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.852 [2024-11-17 01:31:11.298678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.852 [2024-11-17 01:31:11.298743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.852 [2024-11-17 01:31:11.298831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.852 [2024-11-17 01:31:11.298855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.852 [2024-11-17 01:31:11.298878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.852 [2024-11-17 01:31:11.298902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.852 [2024-11-17 01:31:11.298918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.852 [2024-11-17 01:31:11.298941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.852 [2024-11-17 01:31:11.299026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.852 [2024-11-17 01:31:11.299049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.852 [2024-11-17 01:31:11.299072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.113 01:31:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.113 01:31:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.113 01:31:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:03.113 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:03.395 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.395 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.395 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.395 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:03.395 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:03.395 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.395 01:31:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.19 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.19 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:11:15.687 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:15.687 01:31:23 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67202 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67202 ']' 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67202 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67202 00:11:15.687 killing process with pid 67202 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67202' 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67202 00:11:15.687 01:31:23 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67202 00:11:16.633 01:31:24 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:16.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.469 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:17.469 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:17.469 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.469 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.469 00:11:17.469 real 2m29.454s 00:11:17.469 user 1m51.401s 00:11:17.469 sys 0m16.709s 00:11:17.469 01:31:25 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.469 01:31:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.469 ************************************ 00:11:17.469 END TEST sw_hotplug 00:11:17.469 ************************************ 00:11:17.469 01:31:25 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:17.469 01:31:25 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:17.469 01:31:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:17.469 01:31:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.469 01:31:25 -- common/autotest_common.sh@10 -- # set +x 00:11:17.469 ************************************ 00:11:17.469 START TEST nvme_xnvme 00:11:17.469 ************************************ 00:11:17.469 01:31:25 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:17.731 * Looking for test storage... 00:11:17.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:17.731 01:31:25 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:17.731 01:31:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:17.731 01:31:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:17.731 01:31:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.731 01:31:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:17.731 01:31:26 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.731 01:31:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:17.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.731 --rc genhtml_branch_coverage=1 00:11:17.731 --rc genhtml_function_coverage=1 00:11:17.731 --rc genhtml_legend=1 00:11:17.732 --rc geninfo_all_blocks=1 00:11:17.732 --rc geninfo_unexecuted_blocks=1 00:11:17.732 00:11:17.732 ' 00:11:17.732 01:31:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:17.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.732 --rc genhtml_branch_coverage=1 00:11:17.732 --rc genhtml_function_coverage=1 00:11:17.732 --rc genhtml_legend=1 00:11:17.732 --rc geninfo_all_blocks=1 00:11:17.732 --rc geninfo_unexecuted_blocks=1 00:11:17.732 00:11:17.732 ' 00:11:17.732 01:31:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:17.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.732 --rc genhtml_branch_coverage=1 00:11:17.732 --rc genhtml_function_coverage=1 00:11:17.732 --rc genhtml_legend=1 00:11:17.732 --rc geninfo_all_blocks=1 00:11:17.732 --rc geninfo_unexecuted_blocks=1 00:11:17.732 00:11:17.732 ' 00:11:17.732 01:31:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:17.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.732 --rc genhtml_branch_coverage=1 00:11:17.732 --rc genhtml_function_coverage=1 00:11:17.732 --rc genhtml_legend=1 00:11:17.732 --rc geninfo_all_blocks=1 00:11:17.732 --rc geninfo_unexecuted_blocks=1 00:11:17.732 00:11:17.732 ' 00:11:17.732 01:31:26 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.732 01:31:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.732 01:31:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.732 01:31:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.732 01:31:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.732 01:31:26 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.732 01:31:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.732 01:31:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.732 01:31:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:17.732 01:31:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.732 01:31:26 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:17.732 01:31:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:17.732 01:31:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.732 01:31:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:17.732 ************************************ 00:11:17.732 START TEST xnvme_to_malloc_dd_copy 00:11:17.732 ************************************ 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:17.732 01:31:26 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:17.732 01:31:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:17.732 { 00:11:17.732 "subsystems": [ 00:11:17.732 { 00:11:17.732 "subsystem": "bdev", 00:11:17.732 "config": [ 00:11:17.732 { 00:11:17.732 "params": { 00:11:17.732 "block_size": 512, 00:11:17.732 "num_blocks": 2097152, 00:11:17.732 "name": "malloc0" 00:11:17.732 }, 00:11:17.732 "method": "bdev_malloc_create" 00:11:17.732 }, 00:11:17.732 { 00:11:17.732 "params": { 00:11:17.732 "io_mechanism": "libaio", 00:11:17.732 "filename": "/dev/nullb0", 00:11:17.732 "name": "null0" 00:11:17.732 }, 00:11:17.732 "method": "bdev_xnvme_create" 00:11:17.732 }, 00:11:17.732 { 00:11:17.732 "method": "bdev_wait_for_examine" 00:11:17.732 } 00:11:17.732 ] 00:11:17.732 } 00:11:17.732 ] 00:11:17.732 } 00:11:17.994 [2024-11-17 01:31:26.196030] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
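The JSON blocks printed in the log are gen_conf output handed to spdk_dd over an anonymous pipe; the --json /dev/fd/62 in the trace is the file descriptor of a process substitution. With paths as they appear in the log, and /dev/nullb0 present because init_null_blk ran modprobe null_blk gb=1 just before, the two copy directions look like:

    # malloc0 -> null0 as at xnvme.sh@42, then the reverse at @47; gen_conf
    # emits the {"subsystems": ...} document shown above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=malloc0 --ob=null0 --json <(gen_conf)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=null0 --ob=malloc0 --json <(gen_conf)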
00:11:17.994 [2024-11-17 01:31:26.196431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68581 ] 00:11:17.994 [2024-11-17 01:31:26.363769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.255 [2024-11-17 01:31:26.481543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.172  [2024-11-17T01:31:29.574Z] Copying: 225/1024 [MB] (225 MBps) [2024-11-17T01:31:30.960Z] Copying: 461/1024 [MB] (235 MBps) [2024-11-17T01:31:31.531Z] Copying: 760/1024 [MB] (299 MBps) [2024-11-17T01:31:33.446Z] Copying: 1024/1024 [MB] (average 264 MBps) 00:11:24.987 00:11:24.987 01:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:24.987 01:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:24.987 01:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:24.987 01:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:24.987 { 00:11:24.987 "subsystems": [ 00:11:24.987 { 00:11:24.987 "subsystem": "bdev", 00:11:24.987 "config": [ 00:11:24.987 { 00:11:24.987 "params": { 00:11:24.987 "block_size": 512, 00:11:24.987 "num_blocks": 2097152, 00:11:24.987 "name": "malloc0" 00:11:24.987 }, 00:11:24.987 "method": "bdev_malloc_create" 00:11:24.987 }, 00:11:24.987 { 00:11:24.987 "params": { 00:11:24.987 "io_mechanism": "libaio", 00:11:24.987 "filename": "/dev/nullb0", 00:11:24.987 "name": "null0" 00:11:24.987 }, 00:11:24.987 "method": "bdev_xnvme_create" 00:11:24.987 }, 00:11:24.987 { 00:11:24.987 "method": "bdev_wait_for_examine" 00:11:24.987 } 00:11:24.987 ] 00:11:24.987 } 00:11:24.987 ] 00:11:24.987 } 00:11:24.987 [2024-11-17 01:31:33.408975] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
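For reference, the null0 bdev that this config creates at startup could also be created at runtime through the RPC named in the JSON, bdev_xnvme_create. The parameters below are taken from the config above; the positional argument order of scripts/rpc.py is an assumption here, not something the log shows.

    # Hypothetical runtime equivalent of the bdev_xnvme_create config entry.
    scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 libaio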
00:11:24.988 [2024-11-17 01:31:33.409220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68663 ] 00:11:25.249 [2024-11-17 01:31:33.569513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.249 [2024-11-17 01:31:33.682612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.800  [2024-11-17T01:31:36.830Z] Copying: 228/1024 [MB] (228 MBps) [2024-11-17T01:31:37.773Z] Copying: 506/1024 [MB] (278 MBps) [2024-11-17T01:31:38.716Z] Copying: 811/1024 [MB] (304 MBps) [2024-11-17T01:31:40.634Z] Copying: 1024/1024 [MB] (average 277 MBps) 00:11:32.175 00:11:32.175 01:31:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:32.175 01:31:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:32.175 01:31:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:32.175 01:31:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:32.175 01:31:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:32.175 01:31:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:32.175 { 00:11:32.175 "subsystems": [ 00:11:32.175 { 00:11:32.175 "subsystem": "bdev", 00:11:32.175 "config": [ 00:11:32.175 { 00:11:32.175 "params": { 00:11:32.175 "block_size": 512, 00:11:32.175 "num_blocks": 2097152, 00:11:32.175 "name": "malloc0" 00:11:32.175 }, 00:11:32.175 "method": "bdev_malloc_create" 00:11:32.175 }, 00:11:32.175 { 00:11:32.175 "params": { 00:11:32.175 "io_mechanism": "io_uring", 00:11:32.175 "filename": "/dev/nullb0", 00:11:32.175 "name": "null0" 00:11:32.175 }, 00:11:32.175 "method": "bdev_xnvme_create" 00:11:32.175 }, 00:11:32.175 { 00:11:32.175 "method": "bdev_wait_for_examine" 00:11:32.175 } 00:11:32.175 ] 00:11:32.175 } 00:11:32.175 ] 00:11:32.175 } 00:11:32.175 [2024-11-17 01:31:40.441030] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
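The libaio and io_uring passes differ in exactly one key: the test loops over the mechanisms declared at xnvme.sh@20-21 and rewrites io_mechanism before regenerating the config, as the @38-39 trace lines above show. Roughly:

    xnvme_io=(libaio io_uring)                             # @20-21
    for io in "${xnvme_io[@]}"; do                         # @38
        method_bdev_xnvme_create_0["io_mechanism"]=$io     # @39
        # run both spdk_dd copy directions with the regenerated JSON
    done

The effect shows up in the copy rates: the libaio passes above averaged 264 and 277 MBps, while the io_uring passes that follow reach 312 and 315 MBps.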
00:11:32.175 [2024-11-17 01:31:40.441272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68751 ] 00:11:32.175 [2024-11-17 01:31:40.597754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.436 [2024-11-17 01:31:40.673082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.353  [2024-11-17T01:31:43.755Z] Copying: 312/1024 [MB] (312 MBps) [2024-11-17T01:31:44.698Z] Copying: 624/1024 [MB] (312 MBps) [2024-11-17T01:31:44.698Z] Copying: 938/1024 [MB] (313 MBps) [2024-11-17T01:31:46.608Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:11:38.149 00:11:38.150 01:31:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:38.150 01:31:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:38.150 01:31:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:38.150 01:31:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:38.410 { 00:11:38.410 "subsystems": [ 00:11:38.410 { 00:11:38.410 "subsystem": "bdev", 00:11:38.410 "config": [ 00:11:38.410 { 00:11:38.410 "params": { 00:11:38.410 "block_size": 512, 00:11:38.411 "num_blocks": 2097152, 00:11:38.411 "name": "malloc0" 00:11:38.411 }, 00:11:38.411 "method": "bdev_malloc_create" 00:11:38.411 }, 00:11:38.411 { 00:11:38.411 "params": { 00:11:38.411 "io_mechanism": "io_uring", 00:11:38.411 "filename": "/dev/nullb0", 00:11:38.411 "name": "null0" 00:11:38.411 }, 00:11:38.411 "method": "bdev_xnvme_create" 00:11:38.411 }, 00:11:38.411 { 00:11:38.411 "method": "bdev_wait_for_examine" 00:11:38.411 } 00:11:38.411 ] 00:11:38.411 } 00:11:38.411 ] 00:11:38.411 } 00:11:38.411 [2024-11-17 01:31:46.658384] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:38.411 [2024-11-17 01:31:46.658503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68827 ] 00:11:38.411 [2024-11-17 01:31:46.814082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.672 [2024-11-17 01:31:46.889525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.582  [2024-11-17T01:31:49.980Z] Copying: 316/1024 [MB] (316 MBps) [2024-11-17T01:31:50.922Z] Copying: 632/1024 [MB] (316 MBps) [2024-11-17T01:31:50.922Z] Copying: 947/1024 [MB] (314 MBps) [2024-11-17T01:31:52.866Z] Copying: 1024/1024 [MB] (average 315 MBps) 00:11:44.407 00:11:44.407 01:31:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:44.407 01:31:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:11:44.407 ************************************ 00:11:44.407 END TEST xnvme_to_malloc_dd_copy 00:11:44.407 ************************************ 00:11:44.407 00:11:44.407 real 0m26.679s 00:11:44.407 user 0m23.239s 00:11:44.407 sys 0m2.907s 00:11:44.407 01:31:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.408 01:31:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:44.408 01:31:52 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:44.408 01:31:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:44.408 01:31:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.408 01:31:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:44.408 ************************************ 00:11:44.408 START TEST xnvme_bdevperf 00:11:44.408 ************************************ 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:44.408 
01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:44.408 01:31:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:44.698 { 00:11:44.698 "subsystems": [ 00:11:44.698 { 00:11:44.698 "subsystem": "bdev", 00:11:44.698 "config": [ 00:11:44.698 { 00:11:44.698 "params": { 00:11:44.698 "io_mechanism": "libaio", 00:11:44.698 "filename": "/dev/nullb0", 00:11:44.698 "name": "null0" 00:11:44.698 }, 00:11:44.698 "method": "bdev_xnvme_create" 00:11:44.698 }, 00:11:44.698 { 00:11:44.698 "method": "bdev_wait_for_examine" 00:11:44.698 } 00:11:44.698 ] 00:11:44.698 } 00:11:44.698 ] 00:11:44.698 } 00:11:44.698 [2024-11-17 01:31:52.914715] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:44.698 [2024-11-17 01:31:52.914840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68925 ] 00:11:44.698 [2024-11-17 01:31:53.074123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.968 [2024-11-17 01:31:53.158809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.968 Running I/O for 5 seconds... 00:11:47.297 202880.00 IOPS, 792.50 MiB/s [2024-11-17T01:31:56.699Z] 203040.00 IOPS, 793.12 MiB/s [2024-11-17T01:31:57.643Z] 203050.67 IOPS, 793.17 MiB/s [2024-11-17T01:31:58.587Z] 203104.00 IOPS, 793.38 MiB/s 00:11:50.128 Latency(us) 00:11:50.128 [2024-11-17T01:31:58.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.128 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:50.128 null0 : 5.00 203100.21 793.36 0.00 0.00 312.77 107.13 1550.18 00:11:50.128 [2024-11-17T01:31:58.587Z] =================================================================================================================== 00:11:50.128 [2024-11-17T01:31:58.587Z] Total : 203100.21 793.36 0.00 0.00 312.77 107.13 1550.18 00:11:50.700 01:31:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:50.700 01:31:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:50.700 01:31:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:50.700 01:31:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:50.700 01:31:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:50.700 01:31:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:50.700 { 00:11:50.700 "subsystems": [ 00:11:50.700 { 00:11:50.700 "subsystem": "bdev", 00:11:50.700 "config": [ 00:11:50.700 { 00:11:50.700 "params": { 00:11:50.700 "io_mechanism": "io_uring", 00:11:50.700 "filename": "/dev/nullb0", 00:11:50.700 "name": "null0" 00:11:50.700 }, 00:11:50.700 "method": "bdev_xnvme_create" 00:11:50.700 }, 00:11:50.700 { 00:11:50.700 "method": 
"bdev_wait_for_examine" 00:11:50.700 } 00:11:50.700 ] 00:11:50.700 } 00:11:50.700 ] 00:11:50.700 } 00:11:50.700 [2024-11-17 01:31:58.998857] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:50.701 [2024-11-17 01:31:58.998975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68996 ] 00:11:50.701 [2024-11-17 01:31:59.156817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.960 [2024-11-17 01:31:59.241386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.220 Running I/O for 5 seconds... 00:11:53.105 229312.00 IOPS, 895.75 MiB/s [2024-11-17T01:32:02.507Z] 229248.00 IOPS, 895.50 MiB/s [2024-11-17T01:32:03.450Z] 229461.33 IOPS, 896.33 MiB/s [2024-11-17T01:32:04.836Z] 229584.00 IOPS, 896.81 MiB/s 00:11:56.377 Latency(us) 00:11:56.377 [2024-11-17T01:32:04.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.377 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:56.377 null0 : 5.00 229612.84 896.93 0.00 0.00 276.33 146.51 1518.67 00:11:56.377 [2024-11-17T01:32:04.836Z] =================================================================================================================== 00:11:56.377 [2024-11-17T01:32:04.836Z] Total : 229612.84 896.93 0.00 0.00 276.33 146.51 1518.67 00:11:56.637 01:32:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:11:56.637 01:32:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:11:56.637 00:11:56.637 real 0m12.171s 00:11:56.637 user 0m9.858s 00:11:56.637 sys 0m2.073s 00:11:56.637 01:32:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.637 ************************************ 00:11:56.637 END TEST xnvme_bdevperf 00:11:56.637 ************************************ 00:11:56.637 01:32:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:56.637 00:11:56.637 real 0m39.135s 00:11:56.637 user 0m33.206s 00:11:56.637 sys 0m5.109s 00:11:56.637 ************************************ 00:11:56.637 END TEST nvme_xnvme 00:11:56.637 01:32:05 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.637 01:32:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.637 ************************************ 00:11:56.637 01:32:05 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:56.637 01:32:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.637 01:32:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.637 01:32:05 -- common/autotest_common.sh@10 -- # set +x 00:11:56.899 ************************************ 00:11:56.899 START TEST blockdev_xnvme 00:11:56.899 ************************************ 00:11:56.899 01:32:05 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:56.899 * Looking for test storage... 
00:11:56.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:56.899 01:32:05 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:56.899 01:32:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:56.899 01:32:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:56.899 01:32:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.899 01:32:05 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:11:56.899 01:32:05 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.899 01:32:05 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.899 --rc genhtml_branch_coverage=1 00:11:56.899 --rc genhtml_function_coverage=1 00:11:56.899 --rc genhtml_legend=1 00:11:56.899 --rc geninfo_all_blocks=1 00:11:56.899 --rc geninfo_unexecuted_blocks=1 00:11:56.899 00:11:56.899 ' 00:11:56.899 01:32:05 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.899 --rc genhtml_branch_coverage=1 00:11:56.899 --rc genhtml_function_coverage=1 00:11:56.899 --rc genhtml_legend=1 
00:11:56.899 --rc geninfo_all_blocks=1 00:11:56.899 --rc geninfo_unexecuted_blocks=1 00:11:56.899 00:11:56.899 ' 00:11:56.899 01:32:05 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.899 --rc genhtml_branch_coverage=1 00:11:56.899 --rc genhtml_function_coverage=1 00:11:56.899 --rc genhtml_legend=1 00:11:56.899 --rc geninfo_all_blocks=1 00:11:56.899 --rc geninfo_unexecuted_blocks=1 00:11:56.899 00:11:56.899 ' 00:11:56.899 01:32:05 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.899 --rc genhtml_branch_coverage=1 00:11:56.899 --rc genhtml_function_coverage=1 00:11:56.899 --rc genhtml_legend=1 00:11:56.899 --rc geninfo_all_blocks=1 00:11:56.899 --rc geninfo_unexecuted_blocks=1 00:11:56.899 00:11:56.899 ' 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:56.899 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69139 00:11:56.900 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:56.900 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69139 00:11:56.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
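[editor's note] The harness at this point launches spdk_tgt and blocks in waitforlisten until the JSON-RPC socket at /var/tmp/spdk.sock answers. The same bring-up can be reproduced by hand; a minimal sketch with repo-relative paths, where the rpc_get_methods polling loop is my own readiness check rather than the harness's exact mechanism:

  # start the SPDK target in the background
  ./build/bin/spdk_tgt &
  # poll the default RPC socket until the target is listening
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # register an xNVMe bdev the same way the trace below does: filename, bdev name, io_mechanism
  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring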
00:11:56.900 01:32:05 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 69139 ']' 00:11:56.900 01:32:05 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.900 01:32:05 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.900 01:32:05 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.900 01:32:05 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:56.900 01:32:05 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.900 01:32:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.900 [2024-11-17 01:32:05.349193] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:56.900 [2024-11-17 01:32:05.349339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69139 ] 00:11:57.161 [2024-11-17 01:32:05.514189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.423 [2024-11-17 01:32:05.636447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.995 01:32:06 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.995 01:32:06 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:11:57.995 01:32:06 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:57.995 01:32:06 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:11:57.995 01:32:06 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:11:57.995 01:32:06 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:11:57.995 01:32:06 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:58.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:58.518 Waiting for block devices as requested 00:11:58.518 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.518 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.780 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.780 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.071 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:04.071 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:04.071 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:04.072 
01:32:12 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:04.072 01:32:12 blockdev_xnvme 
-- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:04.072 nvme0n1 00:12:04.072 nvme1n1 00:12:04.072 nvme2n1 00:12:04.072 nvme2n2 00:12:04.072 nvme2n3 00:12:04.072 nvme3n1 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:04.072 01:32:12 blockdev_xnvme -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 01:32:12 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:04.072 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:04.073 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "113d33fd-b296-4e30-891e-3aea9c71f8d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "113d33fd-b296-4e30-891e-3aea9c71f8d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "56216d81-4e50-4890-bcdd-44f0ad651053"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "56216d81-4e50-4890-bcdd-44f0ad651053",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "08780f64-5f65-4fe8-90eb-a19d5e22f9e2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "08780f64-5f65-4fe8-90eb-a19d5e22f9e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "079b9b13-d6cb-4268-9f4e-81f85512fb60"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "079b9b13-d6cb-4268-9f4e-81f85512fb60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "7d8b445e-6972-4199-b411-1b50431a148c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7d8b445e-6972-4199-b411-1b50431a148c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b07e8d64-f58f-4429-8bed-739611ec356c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b07e8d64-f58f-4429-8bed-739611ec356c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:04.073 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:04.073 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:04.073 01:32:12 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:04.073 
01:32:12 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69139 00:12:04.073 01:32:12 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 69139 ']' 00:12:04.073 01:32:12 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 69139 00:12:04.073 01:32:12 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:12:04.073 01:32:12 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.073 01:32:12 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69139 00:12:04.073 01:32:12 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.073 01:32:12 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.073 killing process with pid 69139 00:12:04.073 01:32:12 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69139' 00:12:04.073 01:32:12 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 69139 00:12:04.073 01:32:12 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 69139 00:12:05.458 01:32:13 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:05.458 01:32:13 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:05.458 01:32:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:05.458 01:32:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.458 01:32:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:05.458 ************************************ 00:12:05.458 START TEST bdev_hello_world 00:12:05.458 ************************************ 00:12:05.458 01:32:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:05.458 [2024-11-17 01:32:13.615627] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:05.458 [2024-11-17 01:32:13.615742] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69497 ] 00:12:05.458 [2024-11-17 01:32:13.780625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.458 [2024-11-17 01:32:13.853386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.719 [2024-11-17 01:32:14.134388] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:05.719 [2024-11-17 01:32:14.134422] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:05.719 [2024-11-17 01:32:14.134434] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:05.719 [2024-11-17 01:32:14.135893] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:05.719 [2024-11-17 01:32:14.136197] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:05.719 [2024-11-17 01:32:14.136214] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:05.719 [2024-11-17 01:32:14.136329] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
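[editor's note] The hello_bdev example just traced opens the named bdev, writes a buffer through an io channel, reads it back ("Hello World!"), and stops the app. Its standalone invocation, as run above, takes the generated bdev config plus a target bdev name (paths shown repo-relative):

  ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1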
00:12:05.720 00:12:05.720 [2024-11-17 01:32:14.136341] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:06.292 00:12:06.292 real 0m1.133s 00:12:06.292 user 0m0.850s 00:12:06.292 sys 0m0.171s 00:12:06.292 01:32:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.292 ************************************ 00:12:06.292 END TEST bdev_hello_world 00:12:06.292 ************************************ 00:12:06.292 01:32:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:06.292 01:32:14 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:06.292 01:32:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:06.292 01:32:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.292 01:32:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:06.292 ************************************ 00:12:06.292 START TEST bdev_bounds 00:12:06.292 ************************************ 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69528 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:06.292 Process bdevio pid: 69528 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69528' 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69528 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 69528 ']' 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:06.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.292 01:32:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:06.553 [2024-11-17 01:32:14.810360] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:06.553 [2024-11-17 01:32:14.810485] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69528 ] 00:12:06.553 [2024-11-17 01:32:14.970559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:06.814 [2024-11-17 01:32:15.059069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.814 [2024-11-17 01:32:15.059432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.814 [2024-11-17 01:32:15.059448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.385 01:32:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.385 01:32:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:07.385 01:32:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:07.385 I/O targets: 00:12:07.385 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:07.385 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:07.385 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:07.385 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:07.385 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:07.385 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:07.385 00:12:07.385 00:12:07.385 CUnit - A unit testing framework for C - Version 2.1-3 00:12:07.385 http://cunit.sourceforge.net/ 00:12:07.385 00:12:07.385 00:12:07.385 Suite: bdevio tests on: nvme3n1 00:12:07.385 Test: blockdev write read block ...passed 00:12:07.385 Test: blockdev write zeroes read block ...passed 00:12:07.385 Test: blockdev write zeroes read no split ...passed 00:12:07.385 Test: blockdev write zeroes read split ...passed 00:12:07.385 Test: blockdev write zeroes read split partial ...passed 00:12:07.385 Test: blockdev reset ...passed 00:12:07.385 Test: blockdev write read 8 blocks ...passed 00:12:07.385 Test: blockdev write read size > 128k ...passed 00:12:07.385 Test: blockdev write read invalid size ...passed 00:12:07.385 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:07.385 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:07.385 Test: blockdev write read max offset ...passed 00:12:07.385 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:07.385 Test: blockdev writev readv 8 blocks ...passed 00:12:07.385 Test: blockdev writev readv 30 x 1block ...passed 00:12:07.385 Test: blockdev writev readv block ...passed 00:12:07.385 Test: blockdev writev readv size > 128k ...passed 00:12:07.385 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:07.385 Test: blockdev comparev and writev ...passed 00:12:07.385 Test: blockdev nvme passthru rw ...passed 00:12:07.385 Test: blockdev nvme passthru vendor specific ...passed 00:12:07.385 Test: blockdev nvme admin passthru ...passed 00:12:07.385 Test: blockdev copy ...passed 00:12:07.385 Suite: bdevio tests on: nvme2n3 00:12:07.385 Test: blockdev write read block ...passed 00:12:07.385 Test: blockdev write zeroes read block ...passed 00:12:07.385 Test: blockdev write zeroes read no split ...passed 00:12:07.385 Test: blockdev write zeroes read split ...passed 00:12:07.385 Test: blockdev write zeroes read split partial ...passed 00:12:07.385 Test: blockdev reset ...passed 
00:12:07.386 Test: blockdev write read 8 blocks ...passed 00:12:07.386 Test: blockdev write read size > 128k ...passed 00:12:07.386 Test: blockdev write read invalid size ...passed 00:12:07.386 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:07.386 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:07.386 Test: blockdev write read max offset ...passed 00:12:07.386 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:07.386 Test: blockdev writev readv 8 blocks ...passed 00:12:07.386 Test: blockdev writev readv 30 x 1block ...passed 00:12:07.386 Test: blockdev writev readv block ...passed 00:12:07.386 Test: blockdev writev readv size > 128k ...passed 00:12:07.386 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:07.386 Test: blockdev comparev and writev ...passed 00:12:07.386 Test: blockdev nvme passthru rw ...passed 00:12:07.386 Test: blockdev nvme passthru vendor specific ...passed 00:12:07.386 Test: blockdev nvme admin passthru ...passed 00:12:07.386 Test: blockdev copy ...passed 00:12:07.386 Suite: bdevio tests on: nvme2n2 00:12:07.386 Test: blockdev write read block ...passed 00:12:07.386 Test: blockdev write zeroes read block ...passed 00:12:07.647 Test: blockdev write zeroes read no split ...passed 00:12:07.647 Test: blockdev write zeroes read split ...passed 00:12:07.647 Test: blockdev write zeroes read split partial ...passed 00:12:07.647 Test: blockdev reset ...passed 00:12:07.647 Test: blockdev write read 8 blocks ...passed 00:12:07.647 Test: blockdev write read size > 128k ...passed 00:12:07.647 Test: blockdev write read invalid size ...passed 00:12:07.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:07.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:07.647 Test: blockdev write read max offset ...passed 00:12:07.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:07.647 Test: blockdev writev readv 8 blocks ...passed 00:12:07.647 Test: blockdev writev readv 30 x 1block ...passed 00:12:07.647 Test: blockdev writev readv block ...passed 00:12:07.647 Test: blockdev writev readv size > 128k ...passed 00:12:07.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:07.647 Test: blockdev comparev and writev ...passed 00:12:07.647 Test: blockdev nvme passthru rw ...passed 00:12:07.647 Test: blockdev nvme passthru vendor specific ...passed 00:12:07.647 Test: blockdev nvme admin passthru ...passed 00:12:07.647 Test: blockdev copy ...passed 00:12:07.647 Suite: bdevio tests on: nvme2n1 00:12:07.647 Test: blockdev write read block ...passed 00:12:07.647 Test: blockdev write zeroes read block ...passed 00:12:07.647 Test: blockdev write zeroes read no split ...passed 00:12:07.647 Test: blockdev write zeroes read split ...passed 00:12:07.647 Test: blockdev write zeroes read split partial ...passed 00:12:07.647 Test: blockdev reset ...passed 00:12:07.647 Test: blockdev write read 8 blocks ...passed 00:12:07.647 Test: blockdev write read size > 128k ...passed 00:12:07.647 Test: blockdev write read invalid size ...passed 00:12:07.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:07.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:07.647 Test: blockdev write read max offset ...passed 00:12:07.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:07.647 Test: blockdev writev readv 8 blocks 
...passed 00:12:07.647 Test: blockdev writev readv 30 x 1block ...passed 00:12:07.647 Test: blockdev writev readv block ...passed 00:12:07.647 Test: blockdev writev readv size > 128k ...passed 00:12:07.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:07.647 Test: blockdev comparev and writev ...passed 00:12:07.647 Test: blockdev nvme passthru rw ...passed 00:12:07.647 Test: blockdev nvme passthru vendor specific ...passed 00:12:07.647 Test: blockdev nvme admin passthru ...passed 00:12:07.647 Test: blockdev copy ...passed 00:12:07.647 Suite: bdevio tests on: nvme1n1 00:12:07.647 Test: blockdev write read block ...passed 00:12:07.647 Test: blockdev write zeroes read block ...passed 00:12:07.647 Test: blockdev write zeroes read no split ...passed 00:12:07.647 Test: blockdev write zeroes read split ...passed 00:12:07.647 Test: blockdev write zeroes read split partial ...passed 00:12:07.647 Test: blockdev reset ...passed 00:12:07.647 Test: blockdev write read 8 blocks ...passed 00:12:07.647 Test: blockdev write read size > 128k ...passed 00:12:07.647 Test: blockdev write read invalid size ...passed 00:12:07.648 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:07.648 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:07.648 Test: blockdev write read max offset ...passed 00:12:07.648 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:07.648 Test: blockdev writev readv 8 blocks ...passed 00:12:07.648 Test: blockdev writev readv 30 x 1block ...passed 00:12:07.648 Test: blockdev writev readv block ...passed 00:12:07.648 Test: blockdev writev readv size > 128k ...passed 00:12:07.648 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:07.648 Test: blockdev comparev and writev ...passed 00:12:07.648 Test: blockdev nvme passthru rw ...passed 00:12:07.648 Test: blockdev nvme passthru vendor specific ...passed 00:12:07.648 Test: blockdev nvme admin passthru ...passed 00:12:07.648 Test: blockdev copy ...passed 00:12:07.648 Suite: bdevio tests on: nvme0n1 00:12:07.648 Test: blockdev write read block ...passed 00:12:07.648 Test: blockdev write zeroes read block ...passed 00:12:07.648 Test: blockdev write zeroes read no split ...passed 00:12:07.648 Test: blockdev write zeroes read split ...passed 00:12:07.648 Test: blockdev write zeroes read split partial ...passed 00:12:07.648 Test: blockdev reset ...passed 00:12:07.648 Test: blockdev write read 8 blocks ...passed 00:12:07.648 Test: blockdev write read size > 128k ...passed 00:12:07.648 Test: blockdev write read invalid size ...passed 00:12:07.648 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:07.648 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:07.648 Test: blockdev write read max offset ...passed 00:12:07.648 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:07.648 Test: blockdev writev readv 8 blocks ...passed 00:12:07.648 Test: blockdev writev readv 30 x 1block ...passed 00:12:07.648 Test: blockdev writev readv block ...passed 00:12:07.648 Test: blockdev writev readv size > 128k ...passed 00:12:07.648 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:07.648 Test: blockdev comparev and writev ...passed 00:12:07.648 Test: blockdev nvme passthru rw ...passed 00:12:07.648 Test: blockdev nvme passthru vendor specific ...passed 00:12:07.648 Test: blockdev nvme admin passthru ...passed 00:12:07.648 Test: blockdev copy ...passed 
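[editor's note] Each suite above runs the same generic bdevio matrix of 23 read/write/vector/passthru tests against one xNVMe bdev; 6 bdevs x 23 tests gives the 138 tests in the summary that follows. The run can be reproduced outside the harness with the two commands from the trace (repo-relative paths; bdevio stays in the foreground until tests.py drives it over its RPC socket):

  ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  ./test/bdev/bdevio/tests.py perform_tests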
00:12:07.648 00:12:07.648 Run Summary: Type Total Ran Passed Failed Inactive 00:12:07.648 suites 6 6 n/a 0 0 00:12:07.648 tests 138 138 138 0 0 00:12:07.648 asserts 780 780 780 0 n/a 00:12:07.648 00:12:07.648 Elapsed time = 0.814 seconds 00:12:07.648 0 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69528 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 69528 ']' 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 69528 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69528 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69528' 00:12:07.648 killing process with pid 69528 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 69528 00:12:07.648 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 69528 00:12:08.591 01:32:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:08.591 00:12:08.591 real 0m2.094s 00:12:08.591 user 0m5.179s 00:12:08.591 sys 0m0.286s 00:12:08.591 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.591 ************************************ 00:12:08.591 END TEST bdev_bounds 00:12:08.591 01:32:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:08.591 ************************************ 00:12:08.591 01:32:16 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:08.591 01:32:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:08.591 01:32:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.591 01:32:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:08.591 ************************************ 00:12:08.591 START TEST bdev_nbd 00:12:08.591 ************************************ 00:12:08.591 01:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:08.591 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:08.591 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:08.591 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.591 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:08.591 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:08.591 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69586 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69586 /var/tmp/spdk-nbd.sock 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 69586 ']' 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:08.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:08.592 01:32:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:08.592 [2024-11-17 01:32:17.002913] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:08.592 [2024-11-17 01:32:17.003316] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.853 [2024-11-17 01:32:17.171665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.115 [2024-11-17 01:32:17.326080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:09.688 01:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:09.688 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:09.688 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:09.688 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.689 
1+0 records in 00:12:09.689 1+0 records out 00:12:09.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502393 s, 8.2 MB/s 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:09.689 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.950 1+0 records in 00:12:09.950 1+0 records out 00:12:09.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011049 s, 3.7 MB/s 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:09.950 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:10.212 01:32:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.212 1+0 records in 00:12:10.212 1+0 records out 00:12:10.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104753 s, 3.9 MB/s 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:10.212 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.474 1+0 records in 00:12:10.474 1+0 records out 00:12:10.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109792 s, 3.7 MB/s 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:10.474 01:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:10.735 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:10.735 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:10.735 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:10.735 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:10.735 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.736 1+0 records in 00:12:10.736 1+0 records out 00:12:10.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000940029 s, 4.4 MB/s 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:10.736 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:10.996 01:32:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.996 1+0 records in 00:12:10.996 1+0 records out 00:12:10.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112863 s, 3.6 MB/s 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:10.996 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:11.257 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd0", 00:12:11.257 "bdev_name": "nvme0n1" 00:12:11.257 }, 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd1", 00:12:11.257 "bdev_name": "nvme1n1" 00:12:11.257 }, 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd2", 00:12:11.257 "bdev_name": "nvme2n1" 00:12:11.257 }, 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd3", 00:12:11.257 "bdev_name": "nvme2n2" 00:12:11.257 }, 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd4", 00:12:11.257 "bdev_name": "nvme2n3" 00:12:11.257 }, 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd5", 00:12:11.257 "bdev_name": "nvme3n1" 00:12:11.257 } 00:12:11.257 ]' 00:12:11.257 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:11.257 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd0", 00:12:11.257 "bdev_name": "nvme0n1" 00:12:11.257 }, 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd1", 00:12:11.257 "bdev_name": "nvme1n1" 00:12:11.257 }, 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd2", 00:12:11.257 "bdev_name": "nvme2n1" 00:12:11.257 }, 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd3", 00:12:11.257 "bdev_name": "nvme2n2" 00:12:11.257 }, 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd4", 00:12:11.257 "bdev_name": "nvme2n3" 00:12:11.257 }, 00:12:11.257 { 00:12:11.257 "nbd_device": "/dev/nbd5", 00:12:11.257 "bdev_name": "nvme3n1" 00:12:11.257 } 00:12:11.257 ]' 00:12:11.257 01:32:19 
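[Editor's note] The records above repeat the same readiness check for each NBD device the test attaches (nbd0 through nbd5): poll /proc/partitions until the device name appears, then read a single 4 KiB block back with O_DIRECT and require a non-empty result. A minimal sketch of that helper, reconstructed from the trace (the retry back-off and the scratch-file path are assumptions; the real helper lives in SPDK's test/common/autotest_common.sh):

waitfornbd() {
    local nbd_name=$1 i
    # Poll until the kernel registers the device (up to 20 attempts).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; the trace does not show the delay
    done
    # Prove the device is actually readable: one 4 KiB direct-I/O block.
    local tmp=/tmp/nbdtest size
    dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]
}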
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:11.257 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:11.257 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.257 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:11.257 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:11.257 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:11.257 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.257 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:11.518 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:11.518 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:11.518 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:11.518 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.518 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.518 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:11.518 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:11.518 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.518 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.518 01:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:11.793 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:11.793 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:11.793 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:11.793 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.793 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.793 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:11.793 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:11.793 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.793 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.793 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.066 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:12.327 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:12.327 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:12.327 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:12.327 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.327 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.327 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:12.327 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:12.327 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.327 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.327 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.588 01:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
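[Editor's note] Teardown mirrors attach: after each nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the device name disappears. A sketch under the same assumptions as above (the sleep between polls is inferred, not shown in the trace):

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # still present; keep waiting
        else
            break       # gone from the partition table, as in the trace
        fi
    done
    return 0
}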
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:12.850 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:13.111 /dev/nbd0 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- 
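[Editor's note] The '[' 0 -ne 0 ']' assertion in the records above comes from nbd_get_count, which asks the SPDK app which NBD devices are still exported and counts them. A sketch of that logic from the trace (the `|| true` guard is an inference from the bare `true` record, since grep -c exits non-zero when it counts zero matches):

nbd_get_count() {
    local rpc_server=$1 json
    json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}

count=$(nbd_get_count /var/tmp/spdk-nbd.sock)   # expected to be 0 once all disks are stopped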
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.111 1+0 records in 00:12:13.111 1+0 records out 00:12:13.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060852 s, 6.7 MB/s 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:13.111 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:13.371 /dev/nbd1 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.371 1+0 records in 00:12:13.371 1+0 records out 00:12:13.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500149 s, 8.2 MB/s 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:13.371 01:32:21 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:13.371 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:13.632 /dev/nbd10 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.632 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.632 1+0 records in 00:12:13.632 1+0 records out 00:12:13.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651893 s, 6.3 MB/s 00:12:13.633 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.633 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:13.633 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.633 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:13.633 01:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:13.633 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.633 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:13.633 01:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:13.894 /dev/nbd11 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.894 01:32:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.894 1+0 records in 00:12:13.894 1+0 records out 00:12:13.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429806 s, 9.5 MB/s 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:13.894 /dev/nbd12 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.894 1+0 records in 00:12:13.894 1+0 records out 00:12:13.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000930299 s, 4.4 MB/s 00:12:13.894 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.155 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:14.155 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.155 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:14.155 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:14.155 01:32:22 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.155 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:14.155 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:14.155 /dev/nbd13 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.156 1+0 records in 00:12:14.156 1+0 records out 00:12:14.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487756 s, 8.4 MB/s 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.156 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:14.417 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:14.417 { 00:12:14.417 "nbd_device": "/dev/nbd0", 00:12:14.417 "bdev_name": "nvme0n1" 00:12:14.418 }, 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd1", 00:12:14.418 "bdev_name": "nvme1n1" 00:12:14.418 }, 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd10", 00:12:14.418 "bdev_name": "nvme2n1" 00:12:14.418 }, 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd11", 00:12:14.418 "bdev_name": "nvme2n2" 00:12:14.418 }, 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd12", 00:12:14.418 "bdev_name": "nvme2n3" 00:12:14.418 }, 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd13", 00:12:14.418 "bdev_name": "nvme3n1" 00:12:14.418 } 00:12:14.418 ]' 00:12:14.418 01:32:22 
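[Editor's note] nbd_rpc_data_verify starts by pairing each bdev with its target device node, which is what produced the six start/wait sequences above. A condensed sketch of that loop (waitfornbd as sketched earlier; rpc.py path as in the trace):

bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for ((i = 0; i < ${#nbd_list[@]}; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"
done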
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd0", 00:12:14.418 "bdev_name": "nvme0n1" 00:12:14.418 }, 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd1", 00:12:14.418 "bdev_name": "nvme1n1" 00:12:14.418 }, 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd10", 00:12:14.418 "bdev_name": "nvme2n1" 00:12:14.418 }, 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd11", 00:12:14.418 "bdev_name": "nvme2n2" 00:12:14.418 }, 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd12", 00:12:14.418 "bdev_name": "nvme2n3" 00:12:14.418 }, 00:12:14.418 { 00:12:14.418 "nbd_device": "/dev/nbd13", 00:12:14.418 "bdev_name": "nvme3n1" 00:12:14.418 } 00:12:14.418 ]' 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:14.418 /dev/nbd1 00:12:14.418 /dev/nbd10 00:12:14.418 /dev/nbd11 00:12:14.418 /dev/nbd12 00:12:14.418 /dev/nbd13' 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:14.418 /dev/nbd1 00:12:14.418 /dev/nbd10 00:12:14.418 /dev/nbd11 00:12:14.418 /dev/nbd12 00:12:14.418 /dev/nbd13' 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:14.418 256+0 records in 00:12:14.418 256+0 records out 00:12:14.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00817377 s, 128 MB/s 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:14.418 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:14.680 256+0 records in 00:12:14.680 256+0 records out 00:12:14.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129947 s, 8.1 MB/s 00:12:14.680 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:14.680 01:32:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:14.942 256+0 records in 00:12:14.942 256+0 records out 00:12:14.942 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.281416 s, 3.7 MB/s 00:12:14.942 01:32:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:14.942 01:32:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:15.204 256+0 records in 00:12:15.204 256+0 records out 00:12:15.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.199717 s, 5.3 MB/s 00:12:15.204 01:32:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.204 01:32:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:15.467 256+0 records in 00:12:15.467 256+0 records out 00:12:15.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238917 s, 4.4 MB/s 00:12:15.467 01:32:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.467 01:32:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:15.729 256+0 records in 00:12:15.729 256+0 records out 00:12:15.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225652 s, 4.6 MB/s 00:12:15.729 01:32:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.729 01:32:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:15.729 256+0 records in 00:12:15.729 256+0 records out 00:12:15.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123415 s, 8.5 MB/s 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.729 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:15.991 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.991 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.991 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.991 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.991 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.991 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.991 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.991 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.991 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.991 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:16.251 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:16.251 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:16.251 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:16.251 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.251 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.251 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:16.252 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.252 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.252 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.252 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
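[Editor's note] The data-integrity pass above writes one 1 MiB random pattern to every exported device and then compares each device back against the pattern byte-for-byte. Condensed sketch of the two loops in the trace (nbd_list as above; the temp-file path is shortened for illustration):

tmp_file=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB pattern
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    # -b reports differing bytes; -n 1M limits the compare to the pattern size
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"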
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:16.512 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:16.512 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:16.512 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:16.512 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.512 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.513 01:32:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:16.774 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:16.774 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:16.774 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:16.774 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.774 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.774 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:16.774 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.774 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.774 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.774 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:17.035 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:17.035 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:17.035 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:17.035 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.035 01:32:25 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.035 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:17.035 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:17.035 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.035 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:17.035 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:17.035 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:17.296 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:17.557 malloc_lvol_verify 00:12:17.557 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:17.557 69fe0e93-5bde-416c-aa78-0b18697b11b7 00:12:17.557 01:32:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:17.818 8c3d0d5e-9f6f-478c-a005-a0f153d705c7 00:12:17.818 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:18.078 /dev/nbd0 00:12:18.078 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:12:18.079 mke2fs 1.47.0 (5-Feb-2023) 00:12:18.079 Discarding device blocks: 0/4096 done 00:12:18.079 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:18.079 00:12:18.079 Allocating group tables: 0/1 done 00:12:18.079 Writing inode tables: 0/1 done 00:12:18.079 Creating journal (1024 blocks): done 00:12:18.079 Writing superblocks and filesystem accounting information: 0/1 done 00:12:18.079 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.079 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69586 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 69586 ']' 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 69586 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69586 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.340 killing process with pid 69586 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69586' 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 69586 00:12:18.340 01:32:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 69586 00:12:18.913 01:32:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:18.913 00:12:18.913 real 0m10.270s 00:12:18.913 user 0m13.957s 00:12:18.913 sys 0m3.551s 00:12:18.913 01:32:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.913 01:32:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:18.913 ************************************ 
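[Editor's note] The closing nbd check above layers a logical volume on a malloc bdev and proves the exported device works end-to-end by formatting it with ext4. Reconstructed sequence (the rpc shell function is an editorial shorthand; sizes as in the trace: a 16 MiB malloc bdev with 512-byte blocks and a 4 MiB lvol):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
rpc bdev_malloc_create -b malloc_lvol_verify 16 512
rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
rpc bdev_lvol_create lvol 4 -l lvs
rpc nbd_start_disk lvs/lvol /dev/nbd0
# A zero here would mean the kernel never picked up the device capacity.
(( $(cat /sys/block/nbd0/size) != 0 ))
mkfs.ext4 /dev/nbd0
rpc nbd_stop_disk /dev/nbd0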
00:12:18.913 END TEST bdev_nbd 00:12:18.913 ************************************ 00:12:18.913 01:32:27 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:18.913 01:32:27 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:18.913 01:32:27 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:18.913 01:32:27 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:18.913 01:32:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.913 01:32:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.913 01:32:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.913 ************************************ 00:12:18.913 START TEST bdev_fio 00:12:18.913 ************************************ 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:12:18.913 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:18.913 ************************************ 00:12:18.913 START TEST bdev_fio_rw_verify 00:12:18.913 ************************************ 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:18.913 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:18.914 01:32:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:19.175 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.175 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.175 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.175 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.175 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.175 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.175 fio-3.35 00:12:19.175 Starting 6 threads 00:12:31.420 00:12:31.420 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69988: Sun Nov 17 01:32:38 2024 00:12:31.420 read: IOPS=17.2k, BW=67.1MiB/s (70.3MB/s)(671MiB/10002msec) 00:12:31.420 slat (usec): min=2, max=1931, avg= 6.22, stdev=15.66 00:12:31.420 clat (usec): min=63, max=319848, avg=1098.49, 
stdev=2295.78 00:12:31.420 lat (usec): min=68, max=319853, avg=1104.72, stdev=2296.03 00:12:31.420 clat percentiles (usec): 00:12:31.420 | 50.000th=[ 947], 99.000th=[ 3458], 99.900th=[ 4752], 00:12:31.420 | 99.990th=[ 5735], 99.999th=[320865] 00:12:31.420 write: IOPS=17.5k, BW=68.5MiB/s (71.8MB/s)(685MiB/10002msec); 0 zone resets 00:12:31.420 slat (usec): min=9, max=4327, avg=40.56, stdev=137.53 00:12:31.420 clat (usec): min=73, max=9413, avg=1345.22, stdev=818.58 00:12:31.420 lat (usec): min=88, max=9446, avg=1385.78, stdev=833.48 00:12:31.420 clat percentiles (usec): 00:12:31.420 | 50.000th=[ 1188], 99.000th=[ 3916], 99.900th=[ 5473], 99.990th=[ 7898], 00:12:31.420 | 99.999th=[ 9372] 00:12:31.420 bw ( KiB/s): min=48406, max=119100, per=100.00%, avg=70292.74, stdev=3195.74, samples=114 00:12:31.420 iops : min=12100, max=29775, avg=17572.63, stdev=798.91, samples=114 00:12:31.420 lat (usec) : 100=0.02%, 250=5.22%, 500=12.55%, 750=13.98%, 1000=14.24% 00:12:31.420 lat (msec) : 2=39.33%, 4=14.01%, 10=0.64%, 500=0.01% 00:12:31.420 cpu : usr=41.23%, sys=33.55%, ctx=6068, majf=0, minf=16602 00:12:31.420 IO depths : 1=11.2%, 2=23.6%, 4=51.3%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.420 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.420 issued rwts: total=171717,175413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:31.420 00:12:31.420 Run status group 0 (all jobs): 00:12:31.420 READ: bw=67.1MiB/s (70.3MB/s), 67.1MiB/s-67.1MiB/s (70.3MB/s-70.3MB/s), io=671MiB (703MB), run=10002-10002msec 00:12:31.420 WRITE: bw=68.5MiB/s (71.8MB/s), 68.5MiB/s-68.5MiB/s (71.8MB/s-71.8MB/s), io=685MiB (718MB), run=10002-10002msec 00:12:31.420 ----------------------------------------------------- 00:12:31.420 Suppressions used: 00:12:31.420 count bytes template 00:12:31.420 6 48 /usr/src/fio/parse.c 00:12:31.420 3577 343392 /usr/src/fio/iolog.c 00:12:31.420 1 8 libtcmalloc_minimal.so 00:12:31.420 1 904 libcrypto.so 00:12:31.420 ----------------------------------------------------- 00:12:31.420 00:12:31.420 00:12:31.420 real 0m11.791s 00:12:31.420 user 0m26.109s 00:12:31.420 sys 0m20.437s 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:31.420 ************************************ 00:12:31.420 END TEST bdev_fio_rw_verify 00:12:31.420 ************************************ 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:12:31.420 01:32:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:31.421 01:32:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "113d33fd-b296-4e30-891e-3aea9c71f8d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "113d33fd-b296-4e30-891e-3aea9c71f8d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "56216d81-4e50-4890-bcdd-44f0ad651053"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "56216d81-4e50-4890-bcdd-44f0ad651053",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "08780f64-5f65-4fe8-90eb-a19d5e22f9e2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "08780f64-5f65-4fe8-90eb-a19d5e22f9e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "079b9b13-d6cb-4268-9f4e-81f85512fb60"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "079b9b13-d6cb-4268-9f4e-81f85512fb60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "7d8b445e-6972-4199-b411-1b50431a148c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7d8b445e-6972-4199-b411-1b50431a148c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b07e8d64-f58f-4429-8bed-739611ec356c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b07e8d64-f58f-4429-8bed-739611ec356c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:31.421 01:32:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:12:31.421 01:32:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:31.421 01:32:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:12:31.421 /home/vagrant/spdk_repo/spdk 00:12:31.421 01:32:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:12:31.421 01:32:39 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:12:31.421 00:12:31.421 real 0m11.954s 00:12:31.421 user 0m26.195s 00:12:31.421 sys 0m20.502s 00:12:31.421 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.421 01:32:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:31.421 ************************************ 00:12:31.421 END TEST bdev_fio 00:12:31.421 ************************************ 00:12:31.421 01:32:39 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:31.421 01:32:39 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:31.421 01:32:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:31.421 01:32:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.421 01:32:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.421 ************************************ 00:12:31.421 START TEST bdev_verify 00:12:31.421 ************************************ 00:12:31.421 01:32:39 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:31.421 [2024-11-17 01:32:39.326045] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:31.421 [2024-11-17 01:32:39.326189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70159 ] 00:12:31.421 [2024-11-17 01:32:39.498592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:31.421 [2024-11-17 01:32:39.615650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.421 [2024-11-17 01:32:39.615714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.682 Running I/O for 5 seconds... 
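For reference, the verify pass whose results follow is a single bdevperf invocation against the xnvme bdev config generated earlier. A minimal sketch of re-running the same workload by hand, with paths and flags copied from the trace above (flag meanings per standard bdevperf usage; -C is simply passed through as it appears in the trace):

    #!/usr/bin/env bash
    # Re-run the verify workload outside the test harness. Flags mirror the trace:
    #   -q 128     queue depth per job          -o 4096  4 KiB I/O size
    #   -w verify  write, read back, compare    -t 5     run for 5 seconds
    #   -m 0x3     cores 0 and 1                -C       passed through as in the trace
    SPDK_REPO=/home/vagrant/spdk_repo/spdk   # tree layout as used in this job
    "$SPDK_REPO/build/examples/bdevperf" \
        --json "$SPDK_REPO/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3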
00:12:34.015 24864.00 IOPS, 97.12 MiB/s [2024-11-17T01:32:43.419Z] 23472.00 IOPS, 91.69 MiB/s [2024-11-17T01:32:44.364Z] 23381.33 IOPS, 91.33 MiB/s [2024-11-17T01:32:45.309Z] 23472.00 IOPS, 91.69 MiB/s [2024-11-17T01:32:45.309Z] 23859.20 IOPS, 93.20 MiB/s 00:12:36.850 Latency(us) 00:12:36.850 [2024-11-17T01:32:45.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.850 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0x0 length 0xa0000 00:12:36.850 nvme0n1 : 5.03 1859.11 7.26 0.00 0.00 68716.27 8973.39 64527.75 00:12:36.850 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0xa0000 length 0xa0000 00:12:36.850 nvme0n1 : 5.05 1874.81 7.32 0.00 0.00 68138.26 7208.96 70980.53 00:12:36.850 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0x0 length 0xbd0bd 00:12:36.850 nvme1n1 : 5.06 2435.03 9.51 0.00 0.00 52219.75 6906.49 59688.17 00:12:36.850 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:36.850 nvme1n1 : 5.07 2400.58 9.38 0.00 0.00 53052.42 5545.35 67754.14 00:12:36.850 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0x0 length 0x80000 00:12:36.850 nvme2n1 : 5.08 1916.35 7.49 0.00 0.00 66273.38 9427.10 64124.46 00:12:36.850 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0x80000 length 0x80000 00:12:36.850 nvme2n1 : 5.06 1924.22 7.52 0.00 0.00 66172.44 7561.85 70173.93 00:12:36.850 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0x0 length 0x80000 00:12:36.850 nvme2n2 : 5.07 1869.20 7.30 0.00 0.00 67802.85 6704.84 65334.35 00:12:36.850 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0x80000 length 0x80000 00:12:36.850 nvme2n2 : 5.06 1873.01 7.32 0.00 0.00 67900.33 11040.30 63721.16 00:12:36.850 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0x0 length 0x80000 00:12:36.850 nvme2n3 : 5.08 1889.31 7.38 0.00 0.00 66983.99 6276.33 65334.35 00:12:36.850 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0x80000 length 0x80000 00:12:36.850 nvme2n3 : 5.06 1872.17 7.31 0.00 0.00 67793.82 6452.78 64527.75 00:12:36.850 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0x0 length 0x20000 00:12:36.850 nvme3n1 : 5.09 1887.62 7.37 0.00 0.00 66996.16 5494.94 70577.23 00:12:36.850 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.850 Verification LBA range: start 0x20000 length 0x20000 00:12:36.850 nvme3n1 : 5.07 1892.92 7.39 0.00 0.00 66923.54 5167.26 73400.32 00:12:36.850 [2024-11-17T01:32:45.309Z] =================================================================================================================== 00:12:36.850 [2024-11-17T01:32:45.309Z] Total : 23694.33 92.56 0.00 0.00 64355.39 5167.26 73400.32 00:12:37.793 00:12:37.793 real 0m6.733s 00:12:37.793 user 0m10.904s 00:12:37.793 sys 0m1.412s 00:12:37.793 01:32:45 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.793 ************************************ 00:12:37.793 END TEST bdev_verify 00:12:37.793 ************************************ 00:12:37.793 01:32:45 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:37.793 01:32:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:37.793 01:32:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:37.793 01:32:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.793 01:32:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.793 ************************************ 00:12:37.793 START TEST bdev_verify_big_io 00:12:37.793 ************************************ 00:12:37.793 01:32:46 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:37.793 [2024-11-17 01:32:46.130412] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:37.794 [2024-11-17 01:32:46.130554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70258 ] 00:12:38.053 [2024-11-17 01:32:46.294347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.054 [2024-11-17 01:32:46.414020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.054 [2024-11-17 01:32:46.414140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.646 Running I/O for 5 seconds... 
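The big-I/O pass below reuses the same test/bdev/bdev.json, only raising the I/O size to 64 KiB (-o 65536). The log never prints that file's contents; judging from the xNVMe bdev dump earlier, it would plausibly be built from bdev_xnvme_create entries. The sketch below is an assumption: the method and parameter names come from SPDK's xnvme bdev module as commonly documented, and the device path and io_mechanism values are purely illustrative, not taken from this log:

    # Hypothetical shape of test/bdev/bdev.json for these runs (assumption,
    # not dumped anywhere in this log). /dev/nvme0n1 and io_uring are
    # illustrative values only.
    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_xnvme_create",
              "params": {
                "name": "nvme0n1",
                "filename": "/dev/nvme0n1",
                "io_mechanism": "io_uring"
              }
            }
          ]
        }
      ]
    }
    EOF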
00:12:44.498 1176.00 IOPS, 73.50 MiB/s [2024-11-17T01:32:52.957Z] 2601.00 IOPS, 162.56 MiB/s [2024-11-17T01:32:52.957Z] 2705.33 IOPS, 169.08 MiB/s 00:12:44.498 Latency(us) 00:12:44.498 [2024-11-17T01:32:52.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.498 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0x0 length 0xa000 00:12:44.498 nvme0n1 : 5.97 89.76 5.61 0.00 0.00 1396564.07 45572.73 2774693.42 00:12:44.498 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0xa000 length 0xa000 00:12:44.498 nvme0n1 : 5.87 139.09 8.69 0.00 0.00 897974.67 95178.44 909841.33 00:12:44.498 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0x0 length 0xbd0b 00:12:44.498 nvme1n1 : 5.98 107.02 6.69 0.00 0.00 1133545.94 9981.64 1290555.08 00:12:44.498 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:44.498 nvme1n1 : 5.87 128.64 8.04 0.00 0.00 942478.66 10889.06 2374621.34 00:12:44.498 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0x0 length 0x8000 00:12:44.498 nvme2n1 : 5.93 83.64 5.23 0.00 0.00 1409934.27 43959.53 2852126.72 00:12:44.498 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0x8000 length 0x8000 00:12:44.498 nvme2n1 : 5.86 147.53 9.22 0.00 0.00 805522.63 79853.10 722710.84 00:12:44.498 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0x0 length 0x8000 00:12:44.498 nvme2n2 : 5.98 104.41 6.53 0.00 0.00 1088543.43 28029.24 2013265.92 00:12:44.498 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0x8000 length 0x8000 00:12:44.498 nvme2n2 : 5.87 162.17 10.14 0.00 0.00 701971.47 71787.13 800144.15 00:12:44.498 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0x0 length 0x8000 00:12:44.498 nvme2n3 : 5.96 64.40 4.02 0.00 0.00 1695263.25 46580.97 4129776.25 00:12:44.498 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0x8000 length 0x8000 00:12:44.498 nvme2n3 : 5.90 113.96 7.12 0.00 0.00 989091.88 93161.94 1632552.17 00:12:44.498 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0x0 length 0x2000 00:12:44.498 nvme3n1 : 5.98 104.27 6.52 0.00 0.00 1004254.37 7713.08 2594015.70 00:12:44.498 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:44.498 Verification LBA range: start 0x2000 length 0x2000 00:12:44.498 nvme3n1 : 5.89 184.86 11.55 0.00 0.00 591880.69 9477.51 1206669.00 00:12:44.498 [2024-11-17T01:32:52.957Z] =================================================================================================================== 00:12:44.498 [2024-11-17T01:32:52.957Z] Total : 1429.74 89.36 0.00 0.00 975913.51 7713.08 4129776.25 00:12:45.887 00:12:45.887 real 0m7.876s 00:12:45.887 user 0m14.337s 00:12:45.887 sys 0m0.492s 00:12:45.887 ************************************ 00:12:45.887 END TEST bdev_verify_big_io 00:12:45.887 ************************************ 00:12:45.887 
01:32:53 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.887 01:32:53 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.887 01:32:53 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:45.887 01:32:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:45.887 01:32:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.887 01:32:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.887 ************************************ 00:12:45.887 START TEST bdev_write_zeroes 00:12:45.887 ************************************ 00:12:45.887 01:32:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:45.887 [2024-11-17 01:32:54.070625] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:45.887 [2024-11-17 01:32:54.071080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70373 ] 00:12:45.887 [2024-11-17 01:32:54.239030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.149 [2024-11-17 01:32:54.356676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.410 Running I/O for 1 seconds... 00:12:47.616 92813.00 IOPS, 362.55 MiB/s 00:12:47.616 Latency(us) 00:12:47.616 [2024-11-17T01:32:56.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.616 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:47.616 nvme0n1 : 1.01 15156.25 59.20 0.00 0.00 8435.05 6251.13 19358.33 00:12:47.616 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:47.616 nvme1n1 : 1.02 16529.08 64.57 0.00 0.00 7712.70 4889.99 16131.94 00:12:47.616 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:47.616 nvme2n1 : 1.01 15134.88 59.12 0.00 0.00 8428.04 6301.54 19963.27 00:12:47.616 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:47.616 nvme2n2 : 1.02 15143.07 59.15 0.00 0.00 8368.04 5797.42 20064.10 00:12:47.616 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:47.616 nvme2n3 : 1.02 15125.96 59.09 0.00 0.00 8371.79 5873.03 20164.92 00:12:47.616 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:47.616 nvme3n1 : 1.02 15091.58 58.95 0.00 0.00 8380.61 5898.24 20265.75 00:12:47.616 [2024-11-17T01:32:56.075Z] =================================================================================================================== 00:12:47.616 [2024-11-17T01:32:56.075Z] Total : 92180.83 360.08 0.00 0.00 8273.74 4889.99 20265.75 00:12:48.189 00:12:48.189 real 0m2.618s 00:12:48.189 user 0m1.929s 00:12:48.189 sys 0m0.480s 00:12:48.189 01:32:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.189 ************************************ 00:12:48.189 END TEST bdev_write_zeroes 00:12:48.189 
************************************ 00:12:48.189 01:32:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:48.463 01:32:56 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:48.463 01:32:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:48.463 01:32:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.463 01:32:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.463 ************************************ 00:12:48.463 START TEST bdev_json_nonenclosed 00:12:48.463 ************************************ 00:12:48.463 01:32:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:48.463 [2024-11-17 01:32:56.753727] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:48.463 [2024-11-17 01:32:56.753897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70415 ] 00:12:48.770 [2024-11-17 01:32:56.918811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.770 [2024-11-17 01:32:57.039041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.770 [2024-11-17 01:32:57.039139] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:48.770 [2024-11-17 01:32:57.039158] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:48.770 [2024-11-17 01:32:57.039169] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:49.032 00:12:49.032 real 0m0.553s 00:12:49.032 user 0m0.335s 00:12:49.032 sys 0m0.111s 00:12:49.032 01:32:57 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.032 01:32:57 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:49.032 ************************************ 00:12:49.032 END TEST bdev_json_nonenclosed 00:12:49.032 ************************************ 00:12:49.032 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:49.032 01:32:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:49.032 01:32:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.032 01:32:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.032 ************************************ 00:12:49.032 START TEST bdev_json_nonarray 00:12:49.032 ************************************ 00:12:49.032 01:32:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:49.032 [2024-11-17 01:32:57.377524] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:49.032 [2024-11-17 01:32:57.377830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70446 ] 00:12:49.293 [2024-11-17 01:32:57.542842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.293 [2024-11-17 01:32:57.659670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.293 [2024-11-17 01:32:57.659783] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:49.293 [2024-11-17 01:32:57.659824] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:49.293 [2024-11-17 01:32:57.659835] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:49.555 00:12:49.555 real 0m0.545s 00:12:49.555 user 0m0.327s 00:12:49.555 sys 0m0.110s 00:12:49.555 01:32:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.555 ************************************ 00:12:49.555 END TEST bdev_json_nonarray 00:12:49.555 ************************************ 00:12:49.555 01:32:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:12:49.555 01:32:57 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:50.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:55.424 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:55.424 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:55.424 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:55.424 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:55.424 00:12:55.424 real 0m58.635s 00:12:55.424 user 1m23.203s 00:12:55.424 sys 0m43.137s 00:12:55.424 01:33:03 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.424 ************************************ 00:12:55.424 END TEST blockdev_xnvme 00:12:55.424 ************************************ 00:12:55.424 01:33:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.424 01:33:03 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:55.424 01:33:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.424 01:33:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.424 01:33:03 -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.424 ************************************ 00:12:55.424 START TEST ublk 00:12:55.424 ************************************ 00:12:55.424 01:33:03 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:55.684 * Looking for test storage... 00:12:55.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:55.684 01:33:03 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:55.684 01:33:03 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:12:55.684 01:33:03 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:55.684 01:33:03 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:55.684 01:33:03 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.684 01:33:03 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.684 01:33:03 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.685 01:33:03 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.685 01:33:03 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.685 01:33:03 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.685 01:33:03 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.685 01:33:03 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.685 01:33:03 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.685 01:33:03 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.685 01:33:03 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.685 01:33:03 ublk -- scripts/common.sh@344 -- # case "$op" in 00:12:55.685 01:33:03 ublk -- scripts/common.sh@345 -- # : 1 00:12:55.685 01:33:03 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.685 01:33:03 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.685 01:33:03 ublk -- scripts/common.sh@365 -- # decimal 1 00:12:55.685 01:33:03 ublk -- scripts/common.sh@353 -- # local d=1 00:12:55.685 01:33:03 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.685 01:33:03 ublk -- scripts/common.sh@355 -- # echo 1 00:12:55.685 01:33:03 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.685 01:33:03 ublk -- scripts/common.sh@366 -- # decimal 2 00:12:55.685 01:33:03 ublk -- scripts/common.sh@353 -- # local d=2 00:12:55.685 01:33:03 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.685 01:33:03 ublk -- scripts/common.sh@355 -- # echo 2 00:12:55.685 01:33:03 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.685 01:33:03 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.685 01:33:03 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.685 01:33:03 ublk -- scripts/common.sh@368 -- # return 0 00:12:55.685 01:33:03 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.685 01:33:03 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.685 --rc genhtml_branch_coverage=1 00:12:55.685 --rc genhtml_function_coverage=1 00:12:55.685 --rc genhtml_legend=1 00:12:55.685 --rc geninfo_all_blocks=1 00:12:55.685 --rc geninfo_unexecuted_blocks=1 00:12:55.685 00:12:55.685 ' 00:12:55.685 01:33:03 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.685 --rc genhtml_branch_coverage=1 00:12:55.685 --rc genhtml_function_coverage=1 00:12:55.685 --rc genhtml_legend=1 00:12:55.685 --rc geninfo_all_blocks=1 00:12:55.685 --rc geninfo_unexecuted_blocks=1 00:12:55.685 00:12:55.685 ' 00:12:55.685 01:33:03 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.685 --rc genhtml_branch_coverage=1 00:12:55.685 --rc genhtml_function_coverage=1 00:12:55.685 --rc genhtml_legend=1 00:12:55.685 --rc geninfo_all_blocks=1 00:12:55.685 --rc geninfo_unexecuted_blocks=1 00:12:55.685 00:12:55.685 ' 00:12:55.685 01:33:03 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.685 --rc genhtml_branch_coverage=1 00:12:55.685 --rc genhtml_function_coverage=1 00:12:55.685 --rc genhtml_legend=1 00:12:55.685 --rc geninfo_all_blocks=1 00:12:55.685 --rc geninfo_unexecuted_blocks=1 00:12:55.685 00:12:55.685 ' 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:55.685 01:33:03 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:55.685 01:33:03 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:55.685 01:33:03 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:55.685 01:33:03 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:55.685 01:33:03 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:55.685 01:33:03 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:55.685 01:33:03 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:55.685 01:33:03 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:12:55.685 01:33:03 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:12:55.685 01:33:03 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:12:55.685 01:33:03 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.685 01:33:03 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.685 01:33:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:12:55.685 ************************************ 00:12:55.685 START TEST test_save_ublk_config 00:12:55.685 ************************************ 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:12:55.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70736 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70736 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70736 ']' 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.685 01:33:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:12:55.685 [2024-11-17 01:33:04.107822] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:55.685 [2024-11-17 01:33:04.108170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70736 ] 00:12:55.947 [2024-11-17 01:33:04.274109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.947 [2024-11-17 01:33:04.398114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.890 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.890 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:12:56.890 01:33:05 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:12:56.891 01:33:05 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:12:56.891 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.891 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:12:56.891 [2024-11-17 01:33:05.108830] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:12:56.891 [2024-11-17 01:33:05.109774] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:56.891 malloc0 00:12:56.891 [2024-11-17 01:33:05.179955] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:56.891 [2024-11-17 01:33:05.180047] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:56.891 [2024-11-17 01:33:05.180058] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:56.891 [2024-11-17 01:33:05.180067] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:56.891 [2024-11-17 01:33:05.187851] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:56.891 [2024-11-17 01:33:05.187880] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:56.891 [2024-11-17 01:33:05.195830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:56.891 [2024-11-17 01:33:05.195952] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:56.891 [2024-11-17 01:33:05.219834] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:56.891 0 00:12:56.891 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.891 01:33:05 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:12:56.891 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.891 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:12:57.152 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.152 01:33:05 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:12:57.152 "subsystems": [ 00:12:57.152 { 00:12:57.152 "subsystem": "fsdev", 00:12:57.152 "config": [ 00:12:57.152 { 00:12:57.152 "method": "fsdev_set_opts", 00:12:57.152 "params": { 00:12:57.152 "fsdev_io_pool_size": 65535, 00:12:57.152 "fsdev_io_cache_size": 256 00:12:57.152 } 00:12:57.152 } 00:12:57.152 ] 00:12:57.152 }, 00:12:57.152 { 00:12:57.152 "subsystem": "keyring", 00:12:57.152 "config": [] 00:12:57.152 }, 00:12:57.152 { 00:12:57.152 "subsystem": "iobuf", 00:12:57.152 "config": [ 00:12:57.152 { 
00:12:57.152 "method": "iobuf_set_options", 00:12:57.152 "params": { 00:12:57.152 "small_pool_count": 8192, 00:12:57.152 "large_pool_count": 1024, 00:12:57.152 "small_bufsize": 8192, 00:12:57.152 "large_bufsize": 135168, 00:12:57.152 "enable_numa": false 00:12:57.152 } 00:12:57.152 } 00:12:57.152 ] 00:12:57.152 }, 00:12:57.152 { 00:12:57.152 "subsystem": "sock", 00:12:57.152 "config": [ 00:12:57.152 { 00:12:57.152 "method": "sock_set_default_impl", 00:12:57.152 "params": { 00:12:57.152 "impl_name": "posix" 00:12:57.152 } 00:12:57.152 }, 00:12:57.152 { 00:12:57.153 "method": "sock_impl_set_options", 00:12:57.153 "params": { 00:12:57.153 "impl_name": "ssl", 00:12:57.153 "recv_buf_size": 4096, 00:12:57.153 "send_buf_size": 4096, 00:12:57.153 "enable_recv_pipe": true, 00:12:57.153 "enable_quickack": false, 00:12:57.153 "enable_placement_id": 0, 00:12:57.153 "enable_zerocopy_send_server": true, 00:12:57.153 "enable_zerocopy_send_client": false, 00:12:57.153 "zerocopy_threshold": 0, 00:12:57.153 "tls_version": 0, 00:12:57.153 "enable_ktls": false 00:12:57.153 } 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "method": "sock_impl_set_options", 00:12:57.153 "params": { 00:12:57.153 "impl_name": "posix", 00:12:57.153 "recv_buf_size": 2097152, 00:12:57.153 "send_buf_size": 2097152, 00:12:57.153 "enable_recv_pipe": true, 00:12:57.153 "enable_quickack": false, 00:12:57.153 "enable_placement_id": 0, 00:12:57.153 "enable_zerocopy_send_server": true, 00:12:57.153 "enable_zerocopy_send_client": false, 00:12:57.153 "zerocopy_threshold": 0, 00:12:57.153 "tls_version": 0, 00:12:57.153 "enable_ktls": false 00:12:57.153 } 00:12:57.153 } 00:12:57.153 ] 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "subsystem": "vmd", 00:12:57.153 "config": [] 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "subsystem": "accel", 00:12:57.153 "config": [ 00:12:57.153 { 00:12:57.153 "method": "accel_set_options", 00:12:57.153 "params": { 00:12:57.153 "small_cache_size": 128, 00:12:57.153 "large_cache_size": 16, 00:12:57.153 "task_count": 2048, 00:12:57.153 "sequence_count": 2048, 00:12:57.153 "buf_count": 2048 00:12:57.153 } 00:12:57.153 } 00:12:57.153 ] 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "subsystem": "bdev", 00:12:57.153 "config": [ 00:12:57.153 { 00:12:57.153 "method": "bdev_set_options", 00:12:57.153 "params": { 00:12:57.153 "bdev_io_pool_size": 65535, 00:12:57.153 "bdev_io_cache_size": 256, 00:12:57.153 "bdev_auto_examine": true, 00:12:57.153 "iobuf_small_cache_size": 128, 00:12:57.153 "iobuf_large_cache_size": 16 00:12:57.153 } 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "method": "bdev_raid_set_options", 00:12:57.153 "params": { 00:12:57.153 "process_window_size_kb": 1024, 00:12:57.153 "process_max_bandwidth_mb_sec": 0 00:12:57.153 } 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "method": "bdev_iscsi_set_options", 00:12:57.153 "params": { 00:12:57.153 "timeout_sec": 30 00:12:57.153 } 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "method": "bdev_nvme_set_options", 00:12:57.153 "params": { 00:12:57.153 "action_on_timeout": "none", 00:12:57.153 "timeout_us": 0, 00:12:57.153 "timeout_admin_us": 0, 00:12:57.153 "keep_alive_timeout_ms": 10000, 00:12:57.153 "arbitration_burst": 0, 00:12:57.153 "low_priority_weight": 0, 00:12:57.153 "medium_priority_weight": 0, 00:12:57.153 "high_priority_weight": 0, 00:12:57.153 "nvme_adminq_poll_period_us": 10000, 00:12:57.153 "nvme_ioq_poll_period_us": 0, 00:12:57.153 "io_queue_requests": 0, 00:12:57.153 "delay_cmd_submit": true, 00:12:57.153 "transport_retry_count": 4, 00:12:57.153 
"bdev_retry_count": 3, 00:12:57.153 "transport_ack_timeout": 0, 00:12:57.153 "ctrlr_loss_timeout_sec": 0, 00:12:57.153 "reconnect_delay_sec": 0, 00:12:57.153 "fast_io_fail_timeout_sec": 0, 00:12:57.153 "disable_auto_failback": false, 00:12:57.153 "generate_uuids": false, 00:12:57.153 "transport_tos": 0, 00:12:57.153 "nvme_error_stat": false, 00:12:57.153 "rdma_srq_size": 0, 00:12:57.153 "io_path_stat": false, 00:12:57.153 "allow_accel_sequence": false, 00:12:57.153 "rdma_max_cq_size": 0, 00:12:57.153 "rdma_cm_event_timeout_ms": 0, 00:12:57.153 "dhchap_digests": [ 00:12:57.153 "sha256", 00:12:57.153 "sha384", 00:12:57.153 "sha512" 00:12:57.153 ], 00:12:57.153 "dhchap_dhgroups": [ 00:12:57.153 "null", 00:12:57.153 "ffdhe2048", 00:12:57.153 "ffdhe3072", 00:12:57.153 "ffdhe4096", 00:12:57.153 "ffdhe6144", 00:12:57.153 "ffdhe8192" 00:12:57.153 ] 00:12:57.153 } 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "method": "bdev_nvme_set_hotplug", 00:12:57.153 "params": { 00:12:57.153 "period_us": 100000, 00:12:57.153 "enable": false 00:12:57.153 } 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "method": "bdev_malloc_create", 00:12:57.153 "params": { 00:12:57.153 "name": "malloc0", 00:12:57.153 "num_blocks": 8192, 00:12:57.153 "block_size": 4096, 00:12:57.153 "physical_block_size": 4096, 00:12:57.153 "uuid": "683d5bd1-a952-4f55-95b0-76e3c3b664a7", 00:12:57.153 "optimal_io_boundary": 0, 00:12:57.153 "md_size": 0, 00:12:57.153 "dif_type": 0, 00:12:57.153 "dif_is_head_of_md": false, 00:12:57.153 "dif_pi_format": 0 00:12:57.153 } 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "method": "bdev_wait_for_examine" 00:12:57.153 } 00:12:57.153 ] 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "subsystem": "scsi", 00:12:57.153 "config": null 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "subsystem": "scheduler", 00:12:57.153 "config": [ 00:12:57.153 { 00:12:57.153 "method": "framework_set_scheduler", 00:12:57.153 "params": { 00:12:57.153 "name": "static" 00:12:57.153 } 00:12:57.153 } 00:12:57.153 ] 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "subsystem": "vhost_scsi", 00:12:57.153 "config": [] 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "subsystem": "vhost_blk", 00:12:57.153 "config": [] 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "subsystem": "ublk", 00:12:57.153 "config": [ 00:12:57.153 { 00:12:57.153 "method": "ublk_create_target", 00:12:57.153 "params": { 00:12:57.154 "cpumask": "1" 00:12:57.154 } 00:12:57.154 }, 00:12:57.154 { 00:12:57.154 "method": "ublk_start_disk", 00:12:57.154 "params": { 00:12:57.154 "bdev_name": "malloc0", 00:12:57.154 "ublk_id": 0, 00:12:57.154 "num_queues": 1, 00:12:57.154 "queue_depth": 128 00:12:57.154 } 00:12:57.154 } 00:12:57.154 ] 00:12:57.154 }, 00:12:57.154 { 00:12:57.154 "subsystem": "nbd", 00:12:57.154 "config": [] 00:12:57.154 }, 00:12:57.154 { 00:12:57.154 "subsystem": "nvmf", 00:12:57.154 "config": [ 00:12:57.154 { 00:12:57.154 "method": "nvmf_set_config", 00:12:57.154 "params": { 00:12:57.154 "discovery_filter": "match_any", 00:12:57.154 "admin_cmd_passthru": { 00:12:57.154 "identify_ctrlr": false 00:12:57.154 }, 00:12:57.154 "dhchap_digests": [ 00:12:57.154 "sha256", 00:12:57.154 "sha384", 00:12:57.154 "sha512" 00:12:57.154 ], 00:12:57.154 "dhchap_dhgroups": [ 00:12:57.154 "null", 00:12:57.154 "ffdhe2048", 00:12:57.154 "ffdhe3072", 00:12:57.154 "ffdhe4096", 00:12:57.154 "ffdhe6144", 00:12:57.154 "ffdhe8192" 00:12:57.154 ] 00:12:57.154 } 00:12:57.154 }, 00:12:57.154 { 00:12:57.154 "method": "nvmf_set_max_subsystems", 00:12:57.154 "params": { 00:12:57.154 "max_subsystems": 1024 
00:12:57.154 } 00:12:57.154 }, 00:12:57.154 { 00:12:57.154 "method": "nvmf_set_crdt", 00:12:57.154 "params": { 00:12:57.154 "crdt1": 0, 00:12:57.154 "crdt2": 0, 00:12:57.154 "crdt3": 0 00:12:57.154 } 00:12:57.154 } 00:12:57.154 ] 00:12:57.154 }, 00:12:57.154 { 00:12:57.154 "subsystem": "iscsi", 00:12:57.154 "config": [ 00:12:57.154 { 00:12:57.154 "method": "iscsi_set_options", 00:12:57.154 "params": { 00:12:57.154 "node_base": "iqn.2016-06.io.spdk", 00:12:57.154 "max_sessions": 128, 00:12:57.154 "max_connections_per_session": 2, 00:12:57.154 "max_queue_depth": 64, 00:12:57.154 "default_time2wait": 2, 00:12:57.154 "default_time2retain": 20, 00:12:57.154 "first_burst_length": 8192, 00:12:57.154 "immediate_data": true, 00:12:57.154 "allow_duplicated_isid": false, 00:12:57.154 "error_recovery_level": 0, 00:12:57.154 "nop_timeout": 60, 00:12:57.154 "nop_in_interval": 30, 00:12:57.154 "disable_chap": false, 00:12:57.154 "require_chap": false, 00:12:57.154 "mutual_chap": false, 00:12:57.154 "chap_group": 0, 00:12:57.154 "max_large_datain_per_connection": 64, 00:12:57.154 "max_r2t_per_connection": 4, 00:12:57.154 "pdu_pool_size": 36864, 00:12:57.154 "immediate_data_pool_size": 16384, 00:12:57.154 "data_out_pool_size": 2048 00:12:57.154 } 00:12:57.154 } 00:12:57.154 ] 00:12:57.154 } 00:12:57.154 ] 00:12:57.154 }' 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70736 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70736 ']' 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70736 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70736 00:12:57.154 killing process with pid 70736 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70736' 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70736 00:12:57.154 01:33:05 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70736 00:12:58.539 [2024-11-17 01:33:06.638658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:58.539 [2024-11-17 01:33:06.684848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:58.539 [2024-11-17 01:33:06.684994] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:58.539 [2024-11-17 01:33:06.692840] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:58.539 [2024-11-17 01:33:06.692904] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:58.539 [2024-11-17 01:33:06.692918] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:58.539 [2024-11-17 01:33:06.692942] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:12:58.539 [2024-11-17 01:33:06.693100] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:12:59.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
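At this point the first target (pid 70736) has been shut down, and a second spdk_tgt is being started with -c /dev/fd/63: the JSON captured by save_config is echoed straight back in as the new target's startup config, which is exactly what test_save_ublk_config exercises. A minimal sketch of the same round trip using process substitution (the /dev/fd/63 in the trace is bash's process-substitution fd; socket path is SPDK's default, used throughout this log):

    # Save/restore round trip, as exercised by test_save_ublk_config:
    # dump the live config from a running target, then boot a fresh target
    # from that dump. rpc.py talks to the default /var/tmp/spdk.sock socket.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/scripts/rpc.py" save_config > saved.json
    "$SPDK_REPO/build/bin/spdk_tgt" -L ublk -c <(cat saved.json)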
00:12:59.924 01:33:08 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70796 00:12:59.924 01:33:08 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70796 00:12:59.924 01:33:08 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70796 ']' 00:12:59.924 01:33:08 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.924 01:33:08 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.924 01:33:08 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.924 01:33:08 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:12:59.924 01:33:08 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.924 01:33:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:12:59.924 01:33:08 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:12:59.924 "subsystems": [ 00:12:59.924 { 00:12:59.924 "subsystem": "fsdev", 00:12:59.924 "config": [ 00:12:59.924 { 00:12:59.924 "method": "fsdev_set_opts", 00:12:59.924 "params": { 00:12:59.924 "fsdev_io_pool_size": 65535, 00:12:59.924 "fsdev_io_cache_size": 256 00:12:59.924 } 00:12:59.924 } 00:12:59.924 ] 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "subsystem": "keyring", 00:12:59.924 "config": [] 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "subsystem": "iobuf", 00:12:59.924 "config": [ 00:12:59.924 { 00:12:59.924 "method": "iobuf_set_options", 00:12:59.924 "params": { 00:12:59.924 "small_pool_count": 8192, 00:12:59.924 "large_pool_count": 1024, 00:12:59.924 "small_bufsize": 8192, 00:12:59.924 "large_bufsize": 135168, 00:12:59.924 "enable_numa": false 00:12:59.924 } 00:12:59.924 } 00:12:59.924 ] 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "subsystem": "sock", 00:12:59.924 "config": [ 00:12:59.924 { 00:12:59.924 "method": "sock_set_default_impl", 00:12:59.924 "params": { 00:12:59.924 "impl_name": "posix" 00:12:59.924 } 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "method": "sock_impl_set_options", 00:12:59.924 "params": { 00:12:59.924 "impl_name": "ssl", 00:12:59.924 "recv_buf_size": 4096, 00:12:59.924 "send_buf_size": 4096, 00:12:59.924 "enable_recv_pipe": true, 00:12:59.924 "enable_quickack": false, 00:12:59.924 "enable_placement_id": 0, 00:12:59.924 "enable_zerocopy_send_server": true, 00:12:59.924 "enable_zerocopy_send_client": false, 00:12:59.924 "zerocopy_threshold": 0, 00:12:59.924 "tls_version": 0, 00:12:59.924 "enable_ktls": false 00:12:59.924 } 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "method": "sock_impl_set_options", 00:12:59.924 "params": { 00:12:59.924 "impl_name": "posix", 00:12:59.924 "recv_buf_size": 2097152, 00:12:59.924 "send_buf_size": 2097152, 00:12:59.924 "enable_recv_pipe": true, 00:12:59.924 "enable_quickack": false, 00:12:59.924 "enable_placement_id": 0, 00:12:59.924 "enable_zerocopy_send_server": true, 00:12:59.924 "enable_zerocopy_send_client": false, 00:12:59.924 "zerocopy_threshold": 0, 00:12:59.924 "tls_version": 0, 00:12:59.924 "enable_ktls": false 00:12:59.924 } 00:12:59.924 } 00:12:59.924 ] 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "subsystem": "vmd", 00:12:59.924 "config": [] 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "subsystem": "accel", 00:12:59.924 "config": [ 00:12:59.924 { 00:12:59.924 "method": "accel_set_options", 00:12:59.924 "params": { 
00:12:59.924 "small_cache_size": 128, 00:12:59.924 "large_cache_size": 16, 00:12:59.924 "task_count": 2048, 00:12:59.924 "sequence_count": 2048, 00:12:59.924 "buf_count": 2048 00:12:59.924 } 00:12:59.924 } 00:12:59.924 ] 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "subsystem": "bdev", 00:12:59.924 "config": [ 00:12:59.924 { 00:12:59.924 "method": "bdev_set_options", 00:12:59.924 "params": { 00:12:59.924 "bdev_io_pool_size": 65535, 00:12:59.924 "bdev_io_cache_size": 256, 00:12:59.924 "bdev_auto_examine": true, 00:12:59.924 "iobuf_small_cache_size": 128, 00:12:59.924 "iobuf_large_cache_size": 16 00:12:59.924 } 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "method": "bdev_raid_set_options", 00:12:59.924 "params": { 00:12:59.924 "process_window_size_kb": 1024, 00:12:59.924 "process_max_bandwidth_mb_sec": 0 00:12:59.924 } 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "method": "bdev_iscsi_set_options", 00:12:59.924 "params": { 00:12:59.924 "timeout_sec": 30 00:12:59.924 } 00:12:59.924 }, 00:12:59.924 { 00:12:59.924 "method": "bdev_nvme_set_options", 00:12:59.924 "params": { 00:12:59.924 "action_on_timeout": "none", 00:12:59.924 "timeout_us": 0, 00:12:59.924 "timeout_admin_us": 0, 00:12:59.924 "keep_alive_timeout_ms": 10000, 00:12:59.924 "arbitration_burst": 0, 00:12:59.924 "low_priority_weight": 0, 00:12:59.924 "medium_priority_weight": 0, 00:12:59.924 "high_priority_weight": 0, 00:12:59.924 "nvme_adminq_poll_period_us": 10000, 00:12:59.924 "nvme_ioq_poll_period_us": 0, 00:12:59.924 "io_queue_requests": 0, 00:12:59.924 "delay_cmd_submit": true, 00:12:59.924 "transport_retry_count": 4, 00:12:59.924 "bdev_retry_count": 3, 00:12:59.925 "transport_ack_timeout": 0, 00:12:59.925 "ctrlr_loss_timeout_sec": 0, 00:12:59.925 "reconnect_delay_sec": 0, 00:12:59.925 "fast_io_fail_timeout_sec": 0, 00:12:59.925 "disable_auto_failback": false, 00:12:59.925 "generate_uuids": false, 00:12:59.925 "transport_tos": 0, 00:12:59.925 "nvme_error_stat": false, 00:12:59.925 "rdma_srq_size": 0, 00:12:59.925 "io_path_stat": false, 00:12:59.925 "allow_accel_sequence": false, 00:12:59.925 "rdma_max_cq_size": 0, 00:12:59.925 "rdma_cm_event_timeout_ms": 0, 00:12:59.925 "dhchap_digests": [ 00:12:59.925 "sha256", 00:12:59.925 "sha384", 00:12:59.925 "sha512" 00:12:59.925 ], 00:12:59.925 "dhchap_dhgroups": [ 00:12:59.925 "null", 00:12:59.925 "ffdhe2048", 00:12:59.925 "ffdhe3072", 00:12:59.925 "ffdhe4096", 00:12:59.925 "ffdhe6144", 00:12:59.925 "ffdhe8192" 00:12:59.925 ] 00:12:59.925 } 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "method": "bdev_nvme_set_hotplug", 00:12:59.925 "params": { 00:12:59.925 "period_us": 100000, 00:12:59.925 "enable": false 00:12:59.925 } 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "method": "bdev_malloc_create", 00:12:59.925 "params": { 00:12:59.925 "name": "malloc0", 00:12:59.925 "num_blocks": 8192, 00:12:59.925 "block_size": 4096, 00:12:59.925 "physical_block_size": 4096, 00:12:59.925 "uuid": "683d5bd1-a952-4f55-95b0-76e3c3b664a7", 00:12:59.925 "optimal_io_boundary": 0, 00:12:59.925 "md_size": 0, 00:12:59.925 "dif_type": 0, 00:12:59.925 "dif_is_head_of_md": false, 00:12:59.925 "dif_pi_format": 0 00:12:59.925 } 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "method": "bdev_wait_for_examine" 00:12:59.925 } 00:12:59.925 ] 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "subsystem": "scsi", 00:12:59.925 "config": null 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "subsystem": "scheduler", 00:12:59.925 "config": [ 00:12:59.925 { 00:12:59.925 "method": "framework_set_scheduler", 00:12:59.925 "params": { 00:12:59.925 
"name": "static" 00:12:59.925 } 00:12:59.925 } 00:12:59.925 ] 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "subsystem": "vhost_scsi", 00:12:59.925 "config": [] 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "subsystem": "vhost_blk", 00:12:59.925 "config": [] 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "subsystem": "ublk", 00:12:59.925 "config": [ 00:12:59.925 { 00:12:59.925 "method": "ublk_create_target", 00:12:59.925 "params": { 00:12:59.925 "cpumask": "1" 00:12:59.925 } 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "method": "ublk_start_disk", 00:12:59.925 "params": { 00:12:59.925 "bdev_name": "malloc0", 00:12:59.925 "ublk_id": 0, 00:12:59.925 "num_queues": 1, 00:12:59.925 "queue_depth": 128 00:12:59.925 } 00:12:59.925 } 00:12:59.925 ] 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "subsystem": "nbd", 00:12:59.925 "config": [] 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "subsystem": "nvmf", 00:12:59.925 "config": [ 00:12:59.925 { 00:12:59.925 "method": "nvmf_set_config", 00:12:59.925 "params": { 00:12:59.925 "discovery_filter": "match_any", 00:12:59.925 "admin_cmd_passthru": { 00:12:59.925 "identify_ctrlr": false 00:12:59.925 }, 00:12:59.925 "dhchap_digests": [ 00:12:59.925 "sha256", 00:12:59.925 "sha384", 00:12:59.925 "sha512" 00:12:59.925 ], 00:12:59.925 "dhchap_dhgroups": [ 00:12:59.925 "null", 00:12:59.925 "ffdhe2048", 00:12:59.925 "ffdhe3072", 00:12:59.925 "ffdhe4096", 00:12:59.925 "ffdhe6144", 00:12:59.925 "ffdhe8192" 00:12:59.925 ] 00:12:59.925 } 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "method": "nvmf_set_max_subsystems", 00:12:59.925 "params": { 00:12:59.925 "max_subsystems": 1024 00:12:59.925 } 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "method": "nvmf_set_crdt", 00:12:59.925 "params": { 00:12:59.925 "crdt1": 0, 00:12:59.925 "crdt2": 0, 00:12:59.925 "crdt3": 0 00:12:59.925 } 00:12:59.925 } 00:12:59.925 ] 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "subsystem": "iscsi", 00:12:59.925 "config": [ 00:12:59.925 { 00:12:59.925 "method": "iscsi_set_options", 00:12:59.925 "params": { 00:12:59.925 "node_base": "iqn.2016-06.io.spdk", 00:12:59.925 "max_sessions": 128, 00:12:59.925 "max_connections_per_session": 2, 00:12:59.925 "max_queue_depth": 64, 00:12:59.925 "default_time2wait": 2, 00:12:59.925 "default_time2retain": 20, 00:12:59.925 "first_burst_length": 8192, 00:12:59.925 "immediate_data": true, 00:12:59.925 "allow_duplicated_isid": false, 00:12:59.925 "error_recovery_level": 0, 00:12:59.925 "nop_timeout": 60, 00:12:59.925 "nop_in_interval": 30, 00:12:59.925 "disable_chap": false, 00:12:59.925 "require_chap": false, 00:12:59.925 "mutual_chap": false, 00:12:59.925 "chap_group": 0, 00:12:59.925 "max_large_datain_per_connection": 64, 00:12:59.925 "max_r2t_per_connection": 4, 00:12:59.925 "pdu_pool_size": 36864, 00:12:59.925 "immediate_data_pool_size": 16384, 00:12:59.925 "data_out_pool_size": 2048 00:12:59.925 } 00:12:59.925 } 00:12:59.925 ] 00:12:59.925 } 00:12:59.925 ] 00:12:59.925 }' 00:12:59.925 [2024-11-17 01:33:08.348266] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:59.925 [2024-11-17 01:33:08.348564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70796 ] 00:13:00.185 [2024-11-17 01:33:08.501411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.185 [2024-11-17 01:33:08.577455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.125 [2024-11-17 01:33:09.215805] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:01.125 [2024-11-17 01:33:09.216443] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:01.125 [2024-11-17 01:33:09.223896] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:01.125 [2024-11-17 01:33:09.223954] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:01.125 [2024-11-17 01:33:09.223962] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:01.125 [2024-11-17 01:33:09.223967] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:01.125 [2024-11-17 01:33:09.232858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:01.126 [2024-11-17 01:33:09.232874] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:01.126 [2024-11-17 01:33:09.239811] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:01.126 [2024-11-17 01:33:09.239922] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:01.126 [2024-11-17 01:33:09.256809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70796 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70796 ']' 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70796 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70796 00:13:01.126 killing process with pid 70796 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.126 
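The rpc_cmd ublk_get_disks / jq / [[ -b ]] sequence above is the actual pass criterion: after the restore, disk 0 must reappear at /dev/ublkb0 and be a real block device. A standalone equivalent of that check, run from the SPDK repo root:

    dev=$(scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
    [[ $dev == /dev/ublkb0 ]] && test -b "$dev" && echo "restore OK"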
01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70796' 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70796 00:13:01.126 01:33:09 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70796 00:13:02.069 [2024-11-17 01:33:10.425393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:02.069 [2024-11-17 01:33:10.454822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:02.069 [2024-11-17 01:33:10.454924] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:02.069 [2024-11-17 01:33:10.462809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:02.069 [2024-11-17 01:33:10.462851] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:02.069 [2024-11-17 01:33:10.462857] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:02.069 [2024-11-17 01:33:10.462877] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:02.069 [2024-11-17 01:33:10.462983] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:03.454 01:33:11 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:03.454 00:13:03.454 real 0m7.609s 00:13:03.454 user 0m5.022s 00:13:03.454 sys 0m3.234s 00:13:03.454 ************************************ 00:13:03.454 END TEST test_save_ublk_config 00:13:03.454 ************************************ 00:13:03.454 01:33:11 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.454 01:33:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:03.454 01:33:11 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70869 00:13:03.454 01:33:11 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:03.454 01:33:11 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70869 00:13:03.454 01:33:11 ublk -- common/autotest_common.sh@835 -- # '[' -z 70869 ']' 00:13:03.454 01:33:11 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:03.454 01:33:11 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.454 01:33:11 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.454 01:33:11 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.455 01:33:11 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.455 01:33:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:03.455 [2024-11-17 01:33:11.743416] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
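For the remaining tests the target is started with -m 0x3, a core mask of 0b11, so two reactors come up (cores 0 and 1) instead of the single core used so far; the startup notice below confirms "Total cores available: 2". The launch, condensed:

    build/bin/spdk_tgt -m 0x3 -L ublk   # 0x3 = cores 0-1; ublk debug logging enabled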
00:13:03.455 [2024-11-17 01:33:11.743530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70869 ] 00:13:03.455 [2024-11-17 01:33:11.903382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:03.715 [2024-11-17 01:33:12.006232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.715 [2024-11-17 01:33:12.006264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.288 01:33:12 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.288 01:33:12 ublk -- common/autotest_common.sh@868 -- # return 0 00:13:04.288 01:33:12 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:04.288 01:33:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:04.288 01:33:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.288 01:33:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:04.288 ************************************ 00:13:04.288 START TEST test_create_ublk 00:13:04.288 ************************************ 00:13:04.288 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:13:04.288 01:33:12 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:04.288 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.288 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:04.288 [2024-11-17 01:33:12.725818] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:04.288 [2024-11-17 01:33:12.728082] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:04.288 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.288 01:33:12 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:04.288 01:33:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:04.288 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.288 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:04.550 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.550 01:33:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:04.550 01:33:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:04.550 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.550 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:04.550 [2024-11-17 01:33:12.949987] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:04.550 [2024-11-17 01:33:12.950430] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:04.550 [2024-11-17 01:33:12.950445] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:04.550 [2024-11-17 01:33:12.950453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:04.550 [2024-11-17 01:33:12.959154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:04.550 [2024-11-17 01:33:12.959182] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:04.550 
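test_create_ublk drives three RPCs, all visible in the trace above: create the ublk target, create a 128 MiB malloc bdev with 4 KiB blocks, and expose it as /dev/ublkb0 with 4 queues of depth 512. The same sequence against a live target, as a sketch from the SPDK repo root:

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create 128 4096           # prints the new bdev name, Malloc0
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512 # ublk id 0 -> /dev/ublkb0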
[2024-11-17 01:33:12.965826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:04.550 [2024-11-17 01:33:12.972876] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:04.550 [2024-11-17 01:33:12.986929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:04.550 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.550 01:33:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:04.550 01:33:12 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:04.550 01:33:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:04.550 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.550 01:33:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:04.812 01:33:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:04.812 { 00:13:04.812 "ublk_device": "/dev/ublkb0", 00:13:04.812 "id": 0, 00:13:04.812 "queue_depth": 512, 00:13:04.812 "num_queues": 4, 00:13:04.812 "bdev_name": "Malloc0" 00:13:04.812 } 00:13:04.812 ]' 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:04.812 01:33:13 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
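run_fio_test above assembles the fio command executed next: a 10-second time-based pattern write of 0xcc across the first 128 MiB (134217728 bytes) of /dev/ublkb0, with direct I/O and verify state recorded. Reflowed for readability (same flags as the trace):

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0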
00:13:04.812 01:33:13 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:05.073 fio: verification read phase will never start because write phase uses all of runtime 00:13:05.073 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:05.073 fio-3.35 00:13:05.073 Starting 1 process 00:13:15.069 00:13:15.069 fio_test: (groupid=0, jobs=1): err= 0: pid=70913: Sun Nov 17 01:33:23 2024 00:13:15.069 write: IOPS=17.8k, BW=69.5MiB/s (72.8MB/s)(695MiB/10001msec); 0 zone resets 00:13:15.069 clat (usec): min=32, max=3969, avg=55.51, stdev=80.84 00:13:15.069 lat (usec): min=32, max=3970, avg=55.91, stdev=80.85 00:13:15.069 clat percentiles (usec): 00:13:15.069 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 38], 20.00th=[ 45], 00:13:15.069 | 30.00th=[ 49], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 56], 00:13:15.069 | 70.00th=[ 58], 80.00th=[ 60], 90.00th=[ 63], 95.00th=[ 67], 00:13:15.069 | 99.00th=[ 78], 99.50th=[ 83], 99.90th=[ 1221], 99.95th=[ 2442], 00:13:15.069 | 99.99th=[ 3425] 00:13:15.069 bw ( KiB/s): min=63920, max=86904, per=100.00%, avg=71410.95, stdev=8604.01, samples=19 00:13:15.069 iops : min=15980, max=21726, avg=17852.74, stdev=2151.00, samples=19 00:13:15.069 lat (usec) : 50=32.75%, 100=67.01%, 250=0.09%, 500=0.02%, 750=0.01% 00:13:15.069 lat (usec) : 1000=0.01% 00:13:15.069 lat (msec) : 2=0.04%, 4=0.07% 00:13:15.069 cpu : usr=2.79%, sys=13.79%, ctx=177827, majf=0, minf=798 00:13:15.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.069 issued rwts: total=0,177827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.069 00:13:15.069 Run status group 0 (all jobs): 00:13:15.069 WRITE: bw=69.5MiB/s (72.8MB/s), 69.5MiB/s-69.5MiB/s (72.8MB/s-72.8MB/s), io=695MiB (728MB), run=10001-10001msec 00:13:15.069 00:13:15.069 Disk stats (read/write): 00:13:15.069 ublkb0: ios=0/176089, merge=0/0, ticks=0/8359, in_queue=8359, util=99.09% 00:13:15.069 01:33:23 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.069 [2024-11-17 01:33:23.433553] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:15.069 [2024-11-17 01:33:23.460358] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:15.069 [2024-11-17 01:33:23.461300] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.069 [2024-11-17 01:33:23.472850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.069 [2024-11-17 01:33:23.477039] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:15.069 [2024-11-17 01:33:23.477055] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.069 01:33:23 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
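The NOT wrapper invoked here asserts the opposite outcome: disk 0 has just been stopped, so stopping it again must fail, and the JSON-RPC error that follows (code -19, "No such device") is the expected result. Standalone, the failing call would look like:

    scripts/rpc.py ublk_stop_disk 0   # fails with -19 once the disk is gone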
00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.069 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.069 [2024-11-17 01:33:23.487869] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:15.069 request: 00:13:15.069 { 00:13:15.069 "ublk_id": 0, 00:13:15.069 "method": "ublk_stop_disk", 00:13:15.069 "req_id": 1 00:13:15.069 } 00:13:15.069 Got JSON-RPC error response 00:13:15.069 response: 00:13:15.069 { 00:13:15.070 "code": -19, 00:13:15.070 "message": "No such device" 00:13:15.070 } 00:13:15.070 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:15.070 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:13:15.070 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:15.070 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:15.070 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:15.070 01:33:23 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:15.070 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.070 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.070 [2024-11-17 01:33:23.503877] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:15.070 [2024-11-17 01:33:23.511804] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:15.070 [2024-11-17 01:33:23.511842] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:15.070 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.070 01:33:23 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:15.070 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.070 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.637 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.637 01:33:23 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:15.637 01:33:23 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:15.637 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.637 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.637 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.637 01:33:23 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:15.637 01:33:23 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:15.637 01:33:23 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:15.637 01:33:23 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:15.637 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.637 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.637 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.637 01:33:23 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:15.637 01:33:23 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:15.637 ************************************ 00:13:15.637 END TEST test_create_ublk 00:13:15.637 ************************************ 00:13:15.637 01:33:23 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:15.637 00:13:15.637 real 0m11.280s 00:13:15.637 user 0m0.595s 00:13:15.637 sys 0m1.468s 00:13:15.637 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.637 01:33:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.637 01:33:24 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:15.637 01:33:24 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:15.637 01:33:24 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.637 01:33:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.637 ************************************ 00:13:15.637 START TEST test_create_multi_ublk 00:13:15.637 ************************************ 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.637 [2024-11-17 01:33:24.047805] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:15.637 [2024-11-17 01:33:24.049518] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.637 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.895 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.895 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:15.895 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:15.895 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.895 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:15.895 [2024-11-17 01:33:24.288027] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
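test_create_multi_ublk repeats the create/start pair four times, Malloc0 through Malloc3, binding each bdev to ublk ids 0-3; the traces for Malloc1, Malloc2 and Malloc3 follow below. Condensed into a loop, the setup is:

    scripts/rpc.py ublk_create_target
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
    done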
00:13:15.895 [2024-11-17 01:33:24.288349] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:15.895 [2024-11-17 01:33:24.288357] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:15.895 [2024-11-17 01:33:24.288366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:15.895 [2024-11-17 01:33:24.299851] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:15.895 [2024-11-17 01:33:24.299873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:15.895 [2024-11-17 01:33:24.311822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:15.895 [2024-11-17 01:33:24.312357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:15.895 [2024-11-17 01:33:24.346821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.154 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.154 [2024-11-17 01:33:24.565918] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:16.154 [2024-11-17 01:33:24.566237] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:16.154 [2024-11-17 01:33:24.566250] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:16.155 [2024-11-17 01:33:24.566256] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:16.155 [2024-11-17 01:33:24.573833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:16.155 [2024-11-17 01:33:24.573851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:16.155 [2024-11-17 01:33:24.581822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:16.155 [2024-11-17 01:33:24.582352] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:16.155 [2024-11-17 01:33:24.598819] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:16.155 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.155 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:16.155 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:16.155 01:33:24 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:16.155 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.155 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.414 [2024-11-17 01:33:24.773900] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:16.414 [2024-11-17 01:33:24.774226] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:16.414 [2024-11-17 01:33:24.774238] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:16.414 [2024-11-17 01:33:24.774245] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:16.414 [2024-11-17 01:33:24.781835] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:16.414 [2024-11-17 01:33:24.781858] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:16.414 [2024-11-17 01:33:24.789825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:16.414 [2024-11-17 01:33:24.790346] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:16.414 [2024-11-17 01:33:24.798839] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.414 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.672 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.672 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:16.672 01:33:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:16.672 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.672 01:33:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.672 [2024-11-17 01:33:24.981926] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:16.672 [2024-11-17 01:33:24.982243] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:16.672 [2024-11-17 01:33:24.982257] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:16.672 [2024-11-17 01:33:24.982263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:16.672 [2024-11-17 
01:33:24.989839] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:16.672 [2024-11-17 01:33:24.989857] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:16.672 [2024-11-17 01:33:24.997822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:16.672 [2024-11-17 01:33:24.998346] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:16.672 [2024-11-17 01:33:25.006845] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:16.672 { 00:13:16.672 "ublk_device": "/dev/ublkb0", 00:13:16.672 "id": 0, 00:13:16.672 "queue_depth": 512, 00:13:16.672 "num_queues": 4, 00:13:16.672 "bdev_name": "Malloc0" 00:13:16.672 }, 00:13:16.672 { 00:13:16.672 "ublk_device": "/dev/ublkb1", 00:13:16.672 "id": 1, 00:13:16.672 "queue_depth": 512, 00:13:16.672 "num_queues": 4, 00:13:16.672 "bdev_name": "Malloc1" 00:13:16.672 }, 00:13:16.672 { 00:13:16.672 "ublk_device": "/dev/ublkb2", 00:13:16.672 "id": 2, 00:13:16.672 "queue_depth": 512, 00:13:16.672 "num_queues": 4, 00:13:16.672 "bdev_name": "Malloc2" 00:13:16.672 }, 00:13:16.672 { 00:13:16.672 "ublk_device": "/dev/ublkb3", 00:13:16.672 "id": 3, 00:13:16.672 "queue_depth": 512, 00:13:16.672 "num_queues": 4, 00:13:16.672 "bdev_name": "Malloc3" 00:13:16.672 } 00:13:16.672 ]' 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:16.672 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
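The jq probes around this point walk the ublk_get_disks array entry by entry, checking device path, id, queue depth, queue count, and backing bdev for each of the four disks. The whole table can also be pulled in one pass:

    scripts/rpc.py ublk_get_disks \
        | jq -r '.[] | "\(.ublk_device) id=\(.id) qd=\(.queue_depth) nq=\(.num_queues) bdev=\(.bdev_name)"'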
00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:16.929 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:17.187 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:17.447 [2024-11-17 01:33:25.701896] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:17.447 [2024-11-17 01:33:25.734329] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:17.447 [2024-11-17 01:33:25.735481] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:17.447 [2024-11-17 01:33:25.741829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:17.447 [2024-11-17 01:33:25.742089] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:17.447 [2024-11-17 01:33:25.742103] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:17.447 [2024-11-17 01:33:25.757869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:17.447 [2024-11-17 01:33:25.791332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:17.447 [2024-11-17 01:33:25.792473] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:17.447 [2024-11-17 01:33:25.797820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:17.447 [2024-11-17 01:33:25.798068] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:17.447 [2024-11-17 01:33:25.798081] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:17.447 [2024-11-17 01:33:25.813888] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:17.447 [2024-11-17 01:33:25.856337] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:17.447 [2024-11-17 01:33:25.857400] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:17.447 [2024-11-17 01:33:25.866848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:17.447 [2024-11-17 01:33:25.867087] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:17.447 [2024-11-17 01:33:25.867095] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.447 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:13:17.447 [2024-11-17 01:33:25.881881] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:17.705 [2024-11-17 01:33:25.925848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:17.705 [2024-11-17 01:33:25.926506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:17.705 [2024-11-17 01:33:25.933823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:17.705 [2024-11-17 01:33:25.934070] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:17.705 [2024-11-17 01:33:25.934083] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:17.705 01:33:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.705 01:33:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:17.705 [2024-11-17 01:33:26.117858] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:17.705 [2024-11-17 01:33:26.125807] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:17.705 [2024-11-17 01:33:26.125835] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:17.705 01:33:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:17.705 01:33:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:17.705 01:33:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:17.705 01:33:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.705 01:33:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:18.273 01:33:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.273 01:33:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:18.273 01:33:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:18.273 01:33:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.273 01:33:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:18.532 01:33:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.532 01:33:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:18.532 01:33:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:18.532 01:33:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.532 01:33:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:18.791 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.791 01:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:18.791 01:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:18.791 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.791 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:19.134 ************************************ 00:13:19.134 END TEST test_create_multi_ublk 00:13:19.134 ************************************ 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:19.134 00:13:19.134 real 0m3.543s 00:13:19.134 user 0m0.807s 00:13:19.134 sys 0m0.162s 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.134 01:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.393 01:33:27 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:19.393 01:33:27 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:19.393 01:33:27 ublk -- ublk/ublk.sh@130 -- # killprocess 70869 00:13:19.393 01:33:27 ublk -- common/autotest_common.sh@954 -- # '[' -z 70869 ']' 00:13:19.393 01:33:27 ublk -- common/autotest_common.sh@958 -- # kill -0 70869 00:13:19.393 01:33:27 ublk -- common/autotest_common.sh@959 -- # uname 00:13:19.393 01:33:27 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.393 01:33:27 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70869 00:13:19.393 killing process with pid 70869 00:13:19.393 01:33:27 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.393 01:33:27 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.393 01:33:27 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70869' 00:13:19.393 01:33:27 ublk -- common/autotest_common.sh@973 -- # kill 70869 00:13:19.393 01:33:27 ublk -- common/autotest_common.sh@978 -- # wait 70869 00:13:19.960 [2024-11-17 01:33:28.200454] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:19.960 [2024-11-17 01:33:28.200658] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:20.528 00:13:20.528 real 0m25.065s 00:13:20.528 user 0m34.770s 00:13:20.528 sys 0m10.795s 00:13:20.528 01:33:28 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.528 ************************************ 00:13:20.528 END TEST ublk 00:13:20.528 ************************************ 00:13:20.528 01:33:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:20.528 01:33:28 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:20.528 01:33:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:13:20.528 01:33:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.528 01:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:20.528 ************************************ 00:13:20.528 START TEST ublk_recovery 00:13:20.528 ************************************ 00:13:20.528 01:33:28 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:20.787 * Looking for test storage... 00:13:20.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.787 01:33:29 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:20.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.787 --rc genhtml_branch_coverage=1 00:13:20.787 --rc genhtml_function_coverage=1 00:13:20.787 --rc genhtml_legend=1 00:13:20.787 --rc geninfo_all_blocks=1 00:13:20.787 --rc geninfo_unexecuted_blocks=1 00:13:20.787 00:13:20.787 ' 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:20.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.787 --rc genhtml_branch_coverage=1 00:13:20.787 --rc genhtml_function_coverage=1 00:13:20.787 --rc genhtml_legend=1 00:13:20.787 --rc geninfo_all_blocks=1 00:13:20.787 --rc geninfo_unexecuted_blocks=1 00:13:20.787 00:13:20.787 ' 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:20.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.787 --rc genhtml_branch_coverage=1 00:13:20.787 --rc genhtml_function_coverage=1 00:13:20.787 --rc genhtml_legend=1 00:13:20.787 --rc geninfo_all_blocks=1 00:13:20.787 --rc geninfo_unexecuted_blocks=1 00:13:20.787 00:13:20.787 ' 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:20.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.787 --rc genhtml_branch_coverage=1 00:13:20.787 --rc genhtml_function_coverage=1 00:13:20.787 --rc genhtml_legend=1 00:13:20.787 --rc geninfo_all_blocks=1 00:13:20.787 --rc geninfo_unexecuted_blocks=1 00:13:20.787 00:13:20.787 ' 00:13:20.787 01:33:29 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:20.787 01:33:29 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:20.787 01:33:29 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:20.787 01:33:29 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:20.787 01:33:29 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:20.787 01:33:29 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:20.787 01:33:29 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:20.787 01:33:29 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:20.787 01:33:29 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:20.787 01:33:29 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:20.787 01:33:29 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71267 00:13:20.787 01:33:29 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:20.787 01:33:29 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71267 00:13:20.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.787 01:33:29 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71267 ']' 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.787 01:33:29 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.788 01:33:29 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.788 01:33:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.788 [2024-11-17 01:33:29.179241] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:20.788 [2024-11-17 01:33:29.179394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71267 ] 00:13:21.046 [2024-11-17 01:33:29.342504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:21.046 [2024-11-17 01:33:29.447442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.046 [2024-11-17 01:33:29.447520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.613 01:33:30 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.613 01:33:30 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:13:21.613 01:33:30 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:21.613 01:33:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.613 01:33:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.613 [2024-11-17 01:33:30.017813] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:21.613 [2024-11-17 01:33:30.019510] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:21.613 01:33:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.613 01:33:30 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:21.613 01:33:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.613 01:33:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.872 malloc0 00:13:21.872 01:33:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.872 01:33:30 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:21.872 01:33:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.872 01:33:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.872 [2024-11-17 01:33:30.113935] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:13:21.872 [2024-11-17 01:33:30.114023] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:21.872 [2024-11-17 01:33:30.114032] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:21.872 [2024-11-17 01:33:30.114041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:21.872 [2024-11-17 01:33:30.122907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:21.872 [2024-11-17 01:33:30.122926] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:21.872 [2024-11-17 01:33:30.129824] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:21.872 [2024-11-17 01:33:30.129951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:21.872 [2024-11-17 01:33:30.144817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:21.872 1 00:13:21.872 01:33:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.872 01:33:30 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:22.807 01:33:31 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:22.807 01:33:31 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71302 00:13:22.807 01:33:31 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:22.807 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:22.807 fio-3.35 00:13:22.807 Starting 1 process 00:13:28.140 01:33:36 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71267 00:13:28.140 01:33:36 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:33.424 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71267 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:33.424 01:33:41 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71412 00:13:33.424 01:33:41 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:33.424 01:33:41 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:33.424 01:33:41 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71412 00:13:33.424 01:33:41 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71412 ']' 00:13:33.424 01:33:41 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.424 01:33:41 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.424 01:33:41 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.424 01:33:41 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.424 01:33:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.424 [2024-11-17 01:33:41.258759] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
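The sequence above is the heart of ublk_recovery.sh: it SIGKILLs the ublk target while fio is mid-run against /dev/ublkb1, then brings up a fresh target and asks it to adopt the orphaned kernel device. Condensed from the rpc_cmd and taskset calls recorded in this log (rpc_cmd is a thin wrapper over scripts/rpc.py; $spdk_pid and $SPDK_BIN_DIR are the script's own variables, and the PIDs and 60-second runtime are specific to this run), the flow is roughly:

    # first target: expose a 64 MiB malloc bdev as kernel device /dev/ublkb1
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128

    # drive I/O against the kernel block device in the background
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &

    kill -9 "$spdk_pid"                         # hard-kill the target mid-I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # start a replacement target

    # second target: recreate the backing bdev, then adopt the existing ublk 1
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1

The repeated UBLK_CMD_GET_DEV_INFO attempts that follow, each reporting "device state 1", appear to be the new target polling until it can take over the dead daemon's queues; once UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY complete, fio runs out its full 60 seconds with err= 0.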
00:13:33.425 [2024-11-17 01:33:41.259212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71412 ] 00:13:33.425 [2024-11-17 01:33:41.428029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:33.425 [2024-11-17 01:33:41.577014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.425 [2024-11-17 01:33:41.577090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.996 01:33:42 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.996 01:33:42 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:13:33.996 01:33:42 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:33.996 01:33:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.996 01:33:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.996 [2024-11-17 01:33:42.383823] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:33.996 [2024-11-17 01:33:42.386487] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:33.996 01:33:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.996 01:33:42 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:33.996 01:33:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.996 01:33:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.256 malloc0 00:13:34.256 01:33:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.256 01:33:42 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:34.256 01:33:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.256 01:33:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.256 [2024-11-17 01:33:42.520009] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:34.256 [2024-11-17 01:33:42.520066] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:34.256 [2024-11-17 01:33:42.520078] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:34.256 [2024-11-17 01:33:42.527873] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:34.256 [2024-11-17 01:33:42.527911] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:34.256 1 00:13:34.256 01:33:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.256 01:33:42 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71302 00:13:35.193 [2024-11-17 01:33:43.527954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:35.193 [2024-11-17 01:33:43.535816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:35.193 [2024-11-17 01:33:43.535833] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:36.124 [2024-11-17 01:33:44.535857] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:36.125 [2024-11-17 01:33:44.542814] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:36.125 [2024-11-17 01:33:44.542831] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:13:37.496 [2024-11-17 01:33:45.542852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:37.496 [2024-11-17 01:33:45.546822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:37.496 [2024-11-17 01:33:45.546832] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:37.496 [2024-11-17 01:33:45.546840] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:37.496 [2024-11-17 01:33:45.546911] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:59.408 [2024-11-17 01:34:06.653827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:59.408 [2024-11-17 01:34:06.661160] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:59.408 [2024-11-17 01:34:06.669014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:59.408 [2024-11-17 01:34:06.669096] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:25.941 00:14:25.941 fio_test: (groupid=0, jobs=1): err= 0: pid=71305: Sun Nov 17 01:34:31 2024 00:14:25.941 read: IOPS=13.5k, BW=52.6MiB/s (55.1MB/s)(3156MiB/60002msec) 00:14:25.941 slat (nsec): min=1101, max=293050, avg=5464.33, stdev=1461.24 00:14:25.941 clat (usec): min=1014, max=30519k, avg=4554.58, stdev=262984.65 00:14:25.941 lat (usec): min=1019, max=30519k, avg=4560.04, stdev=262984.65 00:14:25.941 clat percentiles (usec): 00:14:25.941 | 1.00th=[ 1860], 5.00th=[ 1942], 10.00th=[ 1975], 20.00th=[ 2057], 00:14:25.941 | 30.00th=[ 2089], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2147], 00:14:25.941 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2278], 95.00th=[ 3458], 00:14:25.941 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 7898], 99.95th=[ 8717], 00:14:25.941 | 99.99th=[13042] 00:14:25.941 bw ( KiB/s): min=23472, max=122632, per=100.00%, avg=107833.64, stdev=18094.23, samples=59 00:14:25.941 iops : min= 5868, max=30658, avg=26958.41, stdev=4523.57, samples=59 00:14:25.941 write: IOPS=13.4k, BW=52.5MiB/s (55.1MB/s)(3151MiB/60002msec); 0 zone resets 00:14:25.941 slat (nsec): min=1174, max=190454, avg=5686.07, stdev=1357.43 00:14:25.941 clat (usec): min=1006, max=30519k, avg=4947.29, stdev=280163.35 00:14:25.941 lat (usec): min=1013, max=30519k, avg=4952.97, stdev=280163.34 00:14:25.941 clat percentiles (usec): 00:14:25.941 | 1.00th=[ 1909], 5.00th=[ 2040], 10.00th=[ 2073], 20.00th=[ 2147], 00:14:25.941 | 30.00th=[ 2212], 40.00th=[ 2212], 50.00th=[ 2245], 60.00th=[ 2278], 00:14:25.941 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2376], 95.00th=[ 3392], 00:14:25.941 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 7898], 99.95th=[ 8717], 00:14:25.941 | 99.99th=[13173] 00:14:25.941 bw ( KiB/s): min=23712, max=122632, per=100.00%, avg=107655.37, stdev=18207.75, samples=59 00:14:25.941 iops : min= 5928, max=30658, avg=26913.83, stdev=4551.98, samples=59 00:14:25.941 lat (msec) : 2=7.64%, 4=88.56%, 10=3.76%, 20=0.03%, >=2000=0.01% 00:14:25.941 cpu : usr=3.04%, sys=15.40%, ctx=52555, majf=0, minf=15 00:14:25.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:25.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:25.941 issued rwts: total=807829,806701,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:14:25.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:25.941 00:14:25.941 Run status group 0 (all jobs): 00:14:25.941 READ: bw=52.6MiB/s (55.1MB/s), 52.6MiB/s-52.6MiB/s (55.1MB/s-55.1MB/s), io=3156MiB (3309MB), run=60002-60002msec 00:14:25.941 WRITE: bw=52.5MiB/s (55.1MB/s), 52.5MiB/s-52.5MiB/s (55.1MB/s-55.1MB/s), io=3151MiB (3304MB), run=60002-60002msec 00:14:25.941 00:14:25.941 Disk stats (read/write): 00:14:25.941 ublkb1: ios=804828/803597, merge=0/0, ticks=3628137/3869388, in_queue=7497525, util=99.92% 00:14:25.941 01:34:31 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.941 [2024-11-17 01:34:31.402868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:25.941 [2024-11-17 01:34:31.437913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:25.941 [2024-11-17 01:34:31.438139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:25.941 [2024-11-17 01:34:31.447814] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:25.941 [2024-11-17 01:34:31.447903] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:25.941 [2024-11-17 01:34:31.447910] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.941 01:34:31 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.941 [2024-11-17 01:34:31.462916] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:25.941 [2024-11-17 01:34:31.471800] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:25.941 [2024-11-17 01:34:31.471833] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.941 01:34:31 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:25.941 01:34:31 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:25.941 01:34:31 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71412 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 71412 ']' 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 71412 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71412 00:14:25.941 killing process with pid 71412 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71412' 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@973 -- # kill 71412 00:14:25.941 01:34:31 ublk_recovery -- common/autotest_common.sh@978 -- # wait 71412 00:14:25.941 [2024-11-17 
01:34:32.559873] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:25.941 [2024-11-17 01:34:32.559924] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:25.941 ************************************ 00:14:25.941 END TEST ublk_recovery 00:14:25.941 ************************************ 00:14:25.941 00:14:25.941 real 1m4.372s 00:14:25.941 user 1m44.492s 00:14:25.941 sys 0m24.471s 00:14:25.941 01:34:33 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.941 01:34:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.941 01:34:33 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:14:25.941 01:34:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:14:25.941 01:34:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:25.941 01:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:25.941 01:34:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:14:25.941 01:34:33 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:25.941 01:34:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:25.941 01:34:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.941 01:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:25.941 ************************************ 00:14:25.941 START TEST ftl 00:14:25.941 ************************************ 00:14:25.941 01:34:33 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:25.941 * Looking for test storage... 
00:14:25.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:25.941 01:34:33 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:25.941 01:34:33 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:14:25.941 01:34:33 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:25.941 01:34:33 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:25.941 01:34:33 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.941 01:34:33 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.941 01:34:33 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.941 01:34:33 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.941 01:34:33 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.941 01:34:33 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.941 01:34:33 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.941 01:34:33 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.941 01:34:33 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.941 01:34:33 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.941 01:34:33 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.941 01:34:33 ftl -- scripts/common.sh@344 -- # case "$op" in 00:14:25.941 01:34:33 ftl -- scripts/common.sh@345 -- # : 1 00:14:25.941 01:34:33 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.941 01:34:33 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:25.941 01:34:33 ftl -- scripts/common.sh@365 -- # decimal 1 00:14:25.941 01:34:33 ftl -- scripts/common.sh@353 -- # local d=1 00:14:25.941 01:34:33 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.941 01:34:33 ftl -- scripts/common.sh@355 -- # echo 1 00:14:25.941 01:34:33 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.941 01:34:33 ftl -- scripts/common.sh@366 -- # decimal 2 00:14:25.941 01:34:33 ftl -- scripts/common.sh@353 -- # local d=2 00:14:25.941 01:34:33 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.942 01:34:33 ftl -- scripts/common.sh@355 -- # echo 2 00:14:25.942 01:34:33 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.942 01:34:33 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.942 01:34:33 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.942 01:34:33 ftl -- scripts/common.sh@368 -- # return 0 00:14:25.942 01:34:33 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.942 01:34:33 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:25.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.942 --rc genhtml_branch_coverage=1 00:14:25.942 --rc genhtml_function_coverage=1 00:14:25.942 --rc genhtml_legend=1 00:14:25.942 --rc geninfo_all_blocks=1 00:14:25.942 --rc geninfo_unexecuted_blocks=1 00:14:25.942 00:14:25.942 ' 00:14:25.942 01:34:33 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:25.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.942 --rc genhtml_branch_coverage=1 00:14:25.942 --rc genhtml_function_coverage=1 00:14:25.942 --rc genhtml_legend=1 00:14:25.942 --rc geninfo_all_blocks=1 00:14:25.942 --rc geninfo_unexecuted_blocks=1 00:14:25.942 00:14:25.942 ' 00:14:25.942 01:34:33 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:25.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.942 --rc genhtml_branch_coverage=1 00:14:25.942 --rc genhtml_function_coverage=1 00:14:25.942 --rc 
genhtml_legend=1 00:14:25.942 --rc geninfo_all_blocks=1 00:14:25.942 --rc geninfo_unexecuted_blocks=1 00:14:25.942 00:14:25.942 ' 00:14:25.942 01:34:33 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:25.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.942 --rc genhtml_branch_coverage=1 00:14:25.942 --rc genhtml_function_coverage=1 00:14:25.942 --rc genhtml_legend=1 00:14:25.942 --rc geninfo_all_blocks=1 00:14:25.942 --rc geninfo_unexecuted_blocks=1 00:14:25.942 00:14:25.942 ' 00:14:25.942 01:34:33 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:25.942 01:34:33 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:25.942 01:34:33 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:25.942 01:34:33 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:25.942 01:34:33 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:25.942 01:34:33 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:25.942 01:34:33 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:25.942 01:34:33 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:25.942 01:34:33 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:25.942 01:34:33 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:25.942 01:34:33 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:25.942 01:34:33 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:25.942 01:34:33 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:25.942 01:34:33 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:25.942 01:34:33 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:25.942 01:34:33 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:25.942 01:34:33 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:25.942 01:34:33 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:25.942 01:34:33 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:25.942 01:34:33 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:25.942 01:34:33 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:25.942 01:34:33 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:25.942 01:34:33 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:25.942 01:34:33 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:25.942 01:34:33 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:25.942 01:34:33 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:25.942 01:34:33 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:25.942 01:34:33 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:25.942 01:34:33 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:25.942 01:34:33 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:25.942 01:34:33 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:25.942 01:34:33 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:14:25.942 01:34:33 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:25.942 01:34:33 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:25.942 01:34:33 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:25.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:25.942 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:25.942 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:25.942 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:25.942 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:25.942 01:34:34 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72216 00:14:25.942 01:34:34 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:25.942 01:34:34 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72216 00:14:25.942 01:34:34 ftl -- common/autotest_common.sh@835 -- # '[' -z 72216 ']' 00:14:25.942 01:34:34 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.942 01:34:34 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.942 01:34:34 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.942 01:34:34 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.942 01:34:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:25.942 [2024-11-17 01:34:34.152379] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:25.942 [2024-11-17 01:34:34.152690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72216 ] 00:14:25.942 [2024-11-17 01:34:34.308921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.200 [2024-11-17 01:34:34.400606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.765 01:34:34 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.765 01:34:34 ftl -- common/autotest_common.sh@868 -- # return 0 00:14:26.765 01:34:34 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:26.765 01:34:35 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:27.700 01:34:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:27.700 01:34:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:27.957 01:34:36 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:27.957 01:34:36 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:27.958 01:34:36 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:28.215 01:34:36 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:14:28.215 01:34:36 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:28.215 01:34:36 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:14:28.215 01:34:36 ftl -- ftl/ftl.sh@50 -- # break 00:14:28.215 01:34:36 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:14:28.215 01:34:36 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:14:28.215 01:34:36 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:28.215 01:34:36 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:28.472 01:34:36 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:14:28.472 01:34:36 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:28.472 01:34:36 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:14:28.473 01:34:36 ftl -- ftl/ftl.sh@63 -- # break 00:14:28.473 01:34:36 ftl -- ftl/ftl.sh@66 -- # killprocess 72216 00:14:28.473 01:34:36 ftl -- common/autotest_common.sh@954 -- # '[' -z 72216 ']' 00:14:28.473 01:34:36 ftl -- common/autotest_common.sh@958 -- # kill -0 72216 00:14:28.473 01:34:36 ftl -- common/autotest_common.sh@959 -- # uname 00:14:28.473 01:34:36 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.473 01:34:36 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72216 00:14:28.473 killing process with pid 72216 00:14:28.473 01:34:36 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.473 01:34:36 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.473 01:34:36 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72216' 00:14:28.473 01:34:36 ftl -- common/autotest_common.sh@973 -- # kill 72216 00:14:28.473 01:34:36 ftl -- common/autotest_common.sh@978 -- # wait 72216 00:14:29.849 01:34:37 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:14:29.849 01:34:37 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:29.849 01:34:37 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:29.849 01:34:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.849 01:34:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:29.849 ************************************ 00:14:29.849 START TEST ftl_fio_basic 00:14:29.849 ************************************ 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:29.849 * Looking for test storage... 
00:14:29.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:29.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.849 --rc genhtml_branch_coverage=1 00:14:29.849 --rc genhtml_function_coverage=1 00:14:29.849 --rc genhtml_legend=1 00:14:29.849 --rc geninfo_all_blocks=1 00:14:29.849 --rc geninfo_unexecuted_blocks=1 00:14:29.849 00:14:29.849 ' 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:29.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.849 --rc 
genhtml_branch_coverage=1 00:14:29.849 --rc genhtml_function_coverage=1 00:14:29.849 --rc genhtml_legend=1 00:14:29.849 --rc geninfo_all_blocks=1 00:14:29.849 --rc geninfo_unexecuted_blocks=1 00:14:29.849 00:14:29.849 ' 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:29.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.849 --rc genhtml_branch_coverage=1 00:14:29.849 --rc genhtml_function_coverage=1 00:14:29.849 --rc genhtml_legend=1 00:14:29.849 --rc geninfo_all_blocks=1 00:14:29.849 --rc geninfo_unexecuted_blocks=1 00:14:29.849 00:14:29.849 ' 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:29.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.849 --rc genhtml_branch_coverage=1 00:14:29.849 --rc genhtml_function_coverage=1 00:14:29.849 --rc genhtml_legend=1 00:14:29.849 --rc geninfo_all_blocks=1 00:14:29.849 --rc geninfo_unexecuted_blocks=1 00:14:29.849 00:14:29.849 ' 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:29.849 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:29.850 
01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72343 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72343 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 72343 ']' 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:29.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
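The stretch of trace that follows (create_base_bdev through bdev_ftl_create) assembles the FTL bdev stack that the fio suite will exercise. Stripped of the xtrace noise, the rpc.py sequence recorded below reduces to roughly this sketch, where $lvs and $split_bdev stand for the UUIDs the script captures along the way:

    # base device: a thin-provisioned 103424 MiB lvol carved from the 5 GiB QEMU NVMe at 00:11.0
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"

    # non-volatile cache: a 5171 MiB slice of the second NVMe at 00:10.0
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_split_create nvc0n1 -s 5171 1

    # bind base + cache into the FTL bdev, capping the L2P table at 60 MiB of DRAM
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d "$split_bdev" -c nvc0n1p0 --l2p_dram_limit 60

The three near-identical bdev_get_bdevs JSON dumps in between are get_bdev_size re-querying the same lvol at successive steps; only the jq-extracted block_size and num_blocks values are actually consumed.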
00:14:29.850 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.850 01:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:29.850 [2024-11-17 01:34:38.242489] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:29.850 [2024-11-17 01:34:38.242691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72343 ] 00:14:30.111 [2024-11-17 01:34:38.407222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:30.111 [2024-11-17 01:34:38.551388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.111 [2024-11-17 01:34:38.551776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.111 [2024-11-17 01:34:38.551782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.052 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.052 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:14:31.052 01:34:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:14:31.052 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:14:31.052 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:14:31.052 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:14:31.052 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:14:31.052 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:14:31.312 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:31.312 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:14:31.312 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:31.312 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:14:31.312 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:31.312 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:14:31.312 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:14:31.312 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:31.571 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:31.571 { 00:14:31.571 "name": "nvme0n1", 00:14:31.571 "aliases": [ 00:14:31.571 "634cb783-e683-430b-8d8a-d127c4ec44c0" 00:14:31.571 ], 00:14:31.571 "product_name": "NVMe disk", 00:14:31.571 "block_size": 4096, 00:14:31.571 "num_blocks": 1310720, 00:14:31.571 "uuid": "634cb783-e683-430b-8d8a-d127c4ec44c0", 00:14:31.571 "numa_id": -1, 00:14:31.571 "assigned_rate_limits": { 00:14:31.571 "rw_ios_per_sec": 0, 00:14:31.571 "rw_mbytes_per_sec": 0, 00:14:31.571 "r_mbytes_per_sec": 0, 00:14:31.571 "w_mbytes_per_sec": 0 00:14:31.571 }, 00:14:31.571 "claimed": false, 00:14:31.571 "zoned": false, 00:14:31.571 "supported_io_types": { 00:14:31.571 "read": true, 00:14:31.571 "write": true, 00:14:31.571 "unmap": true, 00:14:31.571 "flush": true, 00:14:31.571 "reset": true, 00:14:31.571 "nvme_admin": true, 00:14:31.571 "nvme_io": true, 00:14:31.571 "nvme_io_md": 
false, 00:14:31.571 "write_zeroes": true, 00:14:31.571 "zcopy": false, 00:14:31.571 "get_zone_info": false, 00:14:31.571 "zone_management": false, 00:14:31.571 "zone_append": false, 00:14:31.571 "compare": true, 00:14:31.571 "compare_and_write": false, 00:14:31.571 "abort": true, 00:14:31.571 "seek_hole": false, 00:14:31.571 "seek_data": false, 00:14:31.571 "copy": true, 00:14:31.571 "nvme_iov_md": false 00:14:31.571 }, 00:14:31.571 "driver_specific": { 00:14:31.571 "nvme": [ 00:14:31.571 { 00:14:31.571 "pci_address": "0000:00:11.0", 00:14:31.571 "trid": { 00:14:31.571 "trtype": "PCIe", 00:14:31.571 "traddr": "0000:00:11.0" 00:14:31.571 }, 00:14:31.571 "ctrlr_data": { 00:14:31.571 "cntlid": 0, 00:14:31.571 "vendor_id": "0x1b36", 00:14:31.571 "model_number": "QEMU NVMe Ctrl", 00:14:31.571 "serial_number": "12341", 00:14:31.571 "firmware_revision": "8.0.0", 00:14:31.571 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:31.571 "oacs": { 00:14:31.571 "security": 0, 00:14:31.571 "format": 1, 00:14:31.571 "firmware": 0, 00:14:31.571 "ns_manage": 1 00:14:31.571 }, 00:14:31.571 "multi_ctrlr": false, 00:14:31.571 "ana_reporting": false 00:14:31.571 }, 00:14:31.571 "vs": { 00:14:31.571 "nvme_version": "1.4" 00:14:31.571 }, 00:14:31.571 "ns_data": { 00:14:31.571 "id": 1, 00:14:31.571 "can_share": false 00:14:31.571 } 00:14:31.571 } 00:14:31.571 ], 00:14:31.571 "mp_policy": "active_passive" 00:14:31.571 } 00:14:31.571 } 00:14:31.571 ]' 00:14:31.571 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:31.571 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:14:31.571 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:31.571 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:14:31.571 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:14:31.572 01:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:14:31.572 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:14:31.572 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:31.572 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:14:31.572 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:31.572 01:34:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:31.829 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:14:31.829 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:32.087 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0df028e4-17ed-4e2a-b2c0-7e7de83a22fb 00:14:32.087 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0df028e4-17ed-4e2a-b2c0-7e7de83a22fb 00:14:32.087 01:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:32.087 01:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:32.087 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:14:32.087 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:14:32.087 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:32.087 01:34:40 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:32.345 { 00:14:32.345 "name": "30f8a871-4cf5-401a-b643-7832ec91dbe9", 00:14:32.345 "aliases": [ 00:14:32.345 "lvs/nvme0n1p0" 00:14:32.345 ], 00:14:32.345 "product_name": "Logical Volume", 00:14:32.345 "block_size": 4096, 00:14:32.345 "num_blocks": 26476544, 00:14:32.345 "uuid": "30f8a871-4cf5-401a-b643-7832ec91dbe9", 00:14:32.345 "assigned_rate_limits": { 00:14:32.345 "rw_ios_per_sec": 0, 00:14:32.345 "rw_mbytes_per_sec": 0, 00:14:32.345 "r_mbytes_per_sec": 0, 00:14:32.345 "w_mbytes_per_sec": 0 00:14:32.345 }, 00:14:32.345 "claimed": false, 00:14:32.345 "zoned": false, 00:14:32.345 "supported_io_types": { 00:14:32.345 "read": true, 00:14:32.345 "write": true, 00:14:32.345 "unmap": true, 00:14:32.345 "flush": false, 00:14:32.345 "reset": true, 00:14:32.345 "nvme_admin": false, 00:14:32.345 "nvme_io": false, 00:14:32.345 "nvme_io_md": false, 00:14:32.345 "write_zeroes": true, 00:14:32.345 "zcopy": false, 00:14:32.345 "get_zone_info": false, 00:14:32.345 "zone_management": false, 00:14:32.345 "zone_append": false, 00:14:32.345 "compare": false, 00:14:32.345 "compare_and_write": false, 00:14:32.345 "abort": false, 00:14:32.345 "seek_hole": true, 00:14:32.345 "seek_data": true, 00:14:32.345 "copy": false, 00:14:32.345 "nvme_iov_md": false 00:14:32.345 }, 00:14:32.345 "driver_specific": { 00:14:32.345 "lvol": { 00:14:32.345 "lvol_store_uuid": "0df028e4-17ed-4e2a-b2c0-7e7de83a22fb", 00:14:32.345 "base_bdev": "nvme0n1", 00:14:32.345 "thin_provision": true, 00:14:32.345 "num_allocated_clusters": 0, 00:14:32.345 "snapshot": false, 00:14:32.345 "clone": false, 00:14:32.345 "esnap_clone": false 00:14:32.345 } 00:14:32.345 } 00:14:32.345 } 00:14:32.345 ]' 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:14:32.345 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:14:32.603 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:32.603 01:34:40 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:14:32.603 01:34:40 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:32.603 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:32.604 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:32.604 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:14:32.604 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:14:32.604 01:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:32.862 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:32.862 { 00:14:32.862 "name": "30f8a871-4cf5-401a-b643-7832ec91dbe9", 00:14:32.862 "aliases": [ 00:14:32.862 "lvs/nvme0n1p0" 00:14:32.862 ], 00:14:32.862 "product_name": "Logical Volume", 00:14:32.862 "block_size": 4096, 00:14:32.862 "num_blocks": 26476544, 00:14:32.862 "uuid": "30f8a871-4cf5-401a-b643-7832ec91dbe9", 00:14:32.862 "assigned_rate_limits": { 00:14:32.862 "rw_ios_per_sec": 0, 00:14:32.862 "rw_mbytes_per_sec": 0, 00:14:32.862 "r_mbytes_per_sec": 0, 00:14:32.862 "w_mbytes_per_sec": 0 00:14:32.862 }, 00:14:32.862 "claimed": false, 00:14:32.862 "zoned": false, 00:14:32.862 "supported_io_types": { 00:14:32.862 "read": true, 00:14:32.862 "write": true, 00:14:32.862 "unmap": true, 00:14:32.862 "flush": false, 00:14:32.862 "reset": true, 00:14:32.862 "nvme_admin": false, 00:14:32.862 "nvme_io": false, 00:14:32.862 "nvme_io_md": false, 00:14:32.862 "write_zeroes": true, 00:14:32.862 "zcopy": false, 00:14:32.862 "get_zone_info": false, 00:14:32.862 "zone_management": false, 00:14:32.862 "zone_append": false, 00:14:32.862 "compare": false, 00:14:32.862 "compare_and_write": false, 00:14:32.862 "abort": false, 00:14:32.862 "seek_hole": true, 00:14:32.862 "seek_data": true, 00:14:32.862 "copy": false, 00:14:32.862 "nvme_iov_md": false 00:14:32.862 }, 00:14:32.862 "driver_specific": { 00:14:32.862 "lvol": { 00:14:32.862 "lvol_store_uuid": "0df028e4-17ed-4e2a-b2c0-7e7de83a22fb", 00:14:32.862 "base_bdev": "nvme0n1", 00:14:32.862 "thin_provision": true, 00:14:32.862 "num_allocated_clusters": 0, 00:14:32.862 "snapshot": false, 00:14:32.862 "clone": false, 00:14:32.862 "esnap_clone": false 00:14:32.862 } 00:14:32.862 } 00:14:32.862 } 00:14:32.862 ]' 00:14:32.862 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:32.862 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:14:32.862 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:32.862 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:14:32.862 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:14:32.862 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:14:32.862 01:34:41 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:14:32.862 01:34:41 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:33.120 01:34:41 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:33.121 01:34:41 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:33.121 01:34:41 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:33.121 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:33.121 01:34:41 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:33.121 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:33.121 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:33.121 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:14:33.121 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:14:33.121 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30f8a871-4cf5-401a-b643-7832ec91dbe9 00:14:33.379 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:33.379 { 00:14:33.379 "name": "30f8a871-4cf5-401a-b643-7832ec91dbe9", 00:14:33.379 "aliases": [ 00:14:33.379 "lvs/nvme0n1p0" 00:14:33.379 ], 00:14:33.379 "product_name": "Logical Volume", 00:14:33.379 "block_size": 4096, 00:14:33.379 "num_blocks": 26476544, 00:14:33.379 "uuid": "30f8a871-4cf5-401a-b643-7832ec91dbe9", 00:14:33.379 "assigned_rate_limits": { 00:14:33.379 "rw_ios_per_sec": 0, 00:14:33.379 "rw_mbytes_per_sec": 0, 00:14:33.379 "r_mbytes_per_sec": 0, 00:14:33.379 "w_mbytes_per_sec": 0 00:14:33.379 }, 00:14:33.379 "claimed": false, 00:14:33.379 "zoned": false, 00:14:33.379 "supported_io_types": { 00:14:33.379 "read": true, 00:14:33.379 "write": true, 00:14:33.379 "unmap": true, 00:14:33.379 "flush": false, 00:14:33.379 "reset": true, 00:14:33.379 "nvme_admin": false, 00:14:33.379 "nvme_io": false, 00:14:33.379 "nvme_io_md": false, 00:14:33.379 "write_zeroes": true, 00:14:33.379 "zcopy": false, 00:14:33.379 "get_zone_info": false, 00:14:33.379 "zone_management": false, 00:14:33.379 "zone_append": false, 00:14:33.379 "compare": false, 00:14:33.379 "compare_and_write": false, 00:14:33.379 "abort": false, 00:14:33.379 "seek_hole": true, 00:14:33.379 "seek_data": true, 00:14:33.379 "copy": false, 00:14:33.379 "nvme_iov_md": false 00:14:33.379 }, 00:14:33.379 "driver_specific": { 00:14:33.379 "lvol": { 00:14:33.379 "lvol_store_uuid": "0df028e4-17ed-4e2a-b2c0-7e7de83a22fb", 00:14:33.379 "base_bdev": "nvme0n1", 00:14:33.379 "thin_provision": true, 00:14:33.379 "num_allocated_clusters": 0, 00:14:33.379 "snapshot": false, 00:14:33.379 "clone": false, 00:14:33.379 "esnap_clone": false 00:14:33.379 } 00:14:33.379 } 00:14:33.379 } 00:14:33.379 ]' 00:14:33.379 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:33.379 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:14:33.379 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:33.379 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:14:33.379 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:14:33.379 01:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:14:33.379 01:34:41 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:33.379 01:34:41 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:33.379 01:34:41 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 30f8a871-4cf5-401a-b643-7832ec91dbe9 -c nvc0n1p0 --l2p_dram_limit 60 00:14:33.641 [2024-11-17 01:34:41.895369] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.895408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:33.641 [2024-11-17 01:34:41.895422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:33.641 [2024-11-17 01:34:41.895430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.641 [2024-11-17 01:34:41.895480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.895490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:33.641 [2024-11-17 01:34:41.895499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:14:33.641 [2024-11-17 01:34:41.895505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.641 [2024-11-17 01:34:41.895542] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:33.641 [2024-11-17 01:34:41.896122] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:33.641 [2024-11-17 01:34:41.896140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.896148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:33.641 [2024-11-17 01:34:41.896158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:14:33.641 [2024-11-17 01:34:41.896165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.641 [2024-11-17 01:34:41.896197] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 88215db1-2f75-44b6-a7d7-b1d1a043e338 00:14:33.641 [2024-11-17 01:34:41.897473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.897589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:33.641 [2024-11-17 01:34:41.897603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:14:33.641 [2024-11-17 01:34:41.897612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.641 [2024-11-17 01:34:41.904381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.904486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:33.641 [2024-11-17 01:34:41.904498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.686 ms 00:14:33.641 [2024-11-17 01:34:41.904508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.641 [2024-11-17 01:34:41.904593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.904604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:33.641 [2024-11-17 01:34:41.904613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:14:33.641 [2024-11-17 01:34:41.904624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.641 [2024-11-17 01:34:41.904673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.904682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:33.641 [2024-11-17 01:34:41.904688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:33.641 [2024-11-17 01:34:41.904695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:14:33.641 [2024-11-17 01:34:41.904718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:33.641 [2024-11-17 01:34:41.907917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.907942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:33.641 [2024-11-17 01:34:41.907952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.201 ms 00:14:33.641 [2024-11-17 01:34:41.907960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.641 [2024-11-17 01:34:41.907999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.908006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:33.641 [2024-11-17 01:34:41.908015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:14:33.641 [2024-11-17 01:34:41.908021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.641 [2024-11-17 01:34:41.908044] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:33.641 [2024-11-17 01:34:41.908164] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:14:33.641 [2024-11-17 01:34:41.908178] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:33.641 [2024-11-17 01:34:41.908187] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:14:33.641 [2024-11-17 01:34:41.908197] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:33.641 [2024-11-17 01:34:41.908205] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:33.641 [2024-11-17 01:34:41.908213] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:33.641 [2024-11-17 01:34:41.908218] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:33.641 [2024-11-17 01:34:41.908226] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:14:33.641 [2024-11-17 01:34:41.908232] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:14:33.641 [2024-11-17 01:34:41.908240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.908247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:33.641 [2024-11-17 01:34:41.908255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:14:33.641 [2024-11-17 01:34:41.908261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.641 [2024-11-17 01:34:41.908335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.641 [2024-11-17 01:34:41.908342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:33.641 [2024-11-17 01:34:41.908359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:14:33.641 [2024-11-17 01:34:41.908365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.641 [2024-11-17 01:34:41.908463] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:33.641 [2024-11-17 01:34:41.908471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:33.641 
[2024-11-17 01:34:41.908482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:33.641 [2024-11-17 01:34:41.908488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:33.641 [2024-11-17 01:34:41.908500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:33.641 [2024-11-17 01:34:41.908513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:33.641 [2024-11-17 01:34:41.908521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:33.641 [2024-11-17 01:34:41.908533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:33.641 [2024-11-17 01:34:41.908539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:33.641 [2024-11-17 01:34:41.908545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:33.641 [2024-11-17 01:34:41.908551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:33.641 [2024-11-17 01:34:41.908559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:14:33.641 [2024-11-17 01:34:41.908564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:33.641 [2024-11-17 01:34:41.908581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:14:33.641 [2024-11-17 01:34:41.908588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:33.641 [2024-11-17 01:34:41.908600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:33.641 [2024-11-17 01:34:41.908612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:33.641 [2024-11-17 01:34:41.908619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:33.641 [2024-11-17 01:34:41.908630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:33.641 [2024-11-17 01:34:41.908636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:33.641 [2024-11-17 01:34:41.908649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:33.641 [2024-11-17 01:34:41.908654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:33.641 [2024-11-17 01:34:41.908666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:33.641 [2024-11-17 01:34:41.908674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:14:33.641 [2024-11-17 01:34:41.908686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:33.641 [2024-11-17 01:34:41.908701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:14:33.641 [2024-11-17 01:34:41.908708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:33.641 [2024-11-17 01:34:41.908714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:14:33.641 [2024-11-17 01:34:41.908720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:14:33.641 [2024-11-17 01:34:41.908725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:33.641 [2024-11-17 01:34:41.908733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:14:33.641 [2024-11-17 01:34:41.908738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:14:33.641 [2024-11-17 01:34:41.908745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:33.642 [2024-11-17 01:34:41.908749] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:33.642 [2024-11-17 01:34:41.908757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:33.642 [2024-11-17 01:34:41.908762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:33.642 [2024-11-17 01:34:41.908769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:33.642 [2024-11-17 01:34:41.908776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:33.642 [2024-11-17 01:34:41.908785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:33.642 [2024-11-17 01:34:41.908808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:33.642 [2024-11-17 01:34:41.908816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:33.642 [2024-11-17 01:34:41.908821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:33.642 [2024-11-17 01:34:41.908828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:33.642 [2024-11-17 01:34:41.908836] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:33.642 [2024-11-17 01:34:41.908845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:33.642 [2024-11-17 01:34:41.908852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:33.642 [2024-11-17 01:34:41.908859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:14:33.642 [2024-11-17 01:34:41.908865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:14:33.642 [2024-11-17 01:34:41.908872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:14:33.642 [2024-11-17 01:34:41.908879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:14:33.642 [2024-11-17 01:34:41.908886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:14:33.642 [2024-11-17 
01:34:41.908891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:14:33.642 [2024-11-17 01:34:41.908898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:14:33.642 [2024-11-17 01:34:41.908904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:14:33.642 [2024-11-17 01:34:41.908915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:14:33.642 [2024-11-17 01:34:41.908920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:14:33.642 [2024-11-17 01:34:41.908928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:14:33.642 [2024-11-17 01:34:41.908933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:14:33.642 [2024-11-17 01:34:41.908940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:14:33.642 [2024-11-17 01:34:41.908946] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:33.642 [2024-11-17 01:34:41.908954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:33.642 [2024-11-17 01:34:41.908962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:33.642 [2024-11-17 01:34:41.908969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:33.642 [2024-11-17 01:34:41.908974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:33.642 [2024-11-17 01:34:41.908983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:33.642 [2024-11-17 01:34:41.908989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.642 [2024-11-17 01:34:41.908996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:33.642 [2024-11-17 01:34:41.909002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:14:33.642 [2024-11-17 01:34:41.909019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.642 [2024-11-17 01:34:41.909088] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
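Two notes on the trace so far. The "unary operator expected" message from fio.sh line 52 is a shell quoting bug, not an FTL failure: the xtrace shows the comparison ran as '[' -eq 1 ']', i.e. the left-hand variable expanded to nothing, so [ parsed -eq as its first operand. A guarded expansion along the lines of [ "${l2p_flag:-0}" -eq 1 ] (variable name hypothetical) would avoid the message; here the test simply evaluates false and the run proceeds to the default 60 MiB L2P DRAM limit. As for the bring-up itself, everything traced above reduces to a handful of rpc.py calls; a minimal sketch, reusing the sizes and PCI addresses from this run, with placeholder UUIDs standing in for the values the create calls print:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # lvstore on the base namespace, then a thin-provisioned 103424 MiB lvol
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs                         # prints <lvs-uuid>
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>           # prints <base-uuid>
    # second NVMe controller for the write-buffer cache, split to the computed 5171 MiB
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1                           # creates nvc0n1p0
    # FTL bdev over base + cache, keeping 60 MiB of the L2P table in DRAM
    $RPC -t 240 bdev_ftl_create -b ftl0 -d <base-uuid> -c nvc0n1p0 --l2p_dram_limit 60

The 'FTL startup' management process this kicks off (superblock, bands, L2P, NV cache, scrub) is what the trace below accounts for, finishing in roughly 2.7 s with the 5-chunk NV cache scrub as the largest single step.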
00:14:33.642 [2024-11-17 01:34:41.909100] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:14:36.255 [2024-11-17 01:34:44.292330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.292397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:36.255 [2024-11-17 01:34:44.292417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2383.229 ms 00:14:36.255 [2024-11-17 01:34:44.292428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.320319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.320521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:36.255 [2024-11-17 01:34:44.320540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.670 ms 00:14:36.255 [2024-11-17 01:34:44.320551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.320680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.320694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:36.255 [2024-11-17 01:34:44.320703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:14:36.255 [2024-11-17 01:34:44.320715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.364870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.364924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:36.255 [2024-11-17 01:34:44.364945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.102 ms 00:14:36.255 [2024-11-17 01:34:44.364961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.365017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.365033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:36.255 [2024-11-17 01:34:44.365045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:36.255 [2024-11-17 01:34:44.365058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.365538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.365565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:36.255 [2024-11-17 01:34:44.365578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:14:36.255 [2024-11-17 01:34:44.365594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.365769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.365785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:36.255 [2024-11-17 01:34:44.365823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:14:36.255 [2024-11-17 01:34:44.365840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.383253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.383286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:36.255 [2024-11-17 
01:34:44.383296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.372 ms 00:14:36.255 [2024-11-17 01:34:44.383306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.395644] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:36.255 [2024-11-17 01:34:44.412680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.412891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:36.255 [2024-11-17 01:34:44.412913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.269 ms 00:14:36.255 [2024-11-17 01:34:44.412925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.462367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.462401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:36.255 [2024-11-17 01:34:44.462417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.407 ms 00:14:36.255 [2024-11-17 01:34:44.462426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.462614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.462625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:36.255 [2024-11-17 01:34:44.462637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:14:36.255 [2024-11-17 01:34:44.462645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.485447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.485481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:36.255 [2024-11-17 01:34:44.485494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.742 ms 00:14:36.255 [2024-11-17 01:34:44.485502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.507535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.507564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:36.255 [2024-11-17 01:34:44.507577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.990 ms 00:14:36.255 [2024-11-17 01:34:44.507584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.508212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.508234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:36.255 [2024-11-17 01:34:44.508245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:14:36.255 [2024-11-17 01:34:44.508252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.573849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.573875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:36.255 [2024-11-17 01:34:44.573891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.551 ms 00:14:36.255 [2024-11-17 01:34:44.573902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 
01:34:44.598272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.598303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:36.255 [2024-11-17 01:34:44.598316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.280 ms 00:14:36.255 [2024-11-17 01:34:44.598324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.255 [2024-11-17 01:34:44.620404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.255 [2024-11-17 01:34:44.620432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:14:36.255 [2024-11-17 01:34:44.620444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.039 ms 00:14:36.256 [2024-11-17 01:34:44.620452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.256 [2024-11-17 01:34:44.643271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.256 [2024-11-17 01:34:44.643305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:36.256 [2024-11-17 01:34:44.643318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.775 ms 00:14:36.256 [2024-11-17 01:34:44.643326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.256 [2024-11-17 01:34:44.643373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.256 [2024-11-17 01:34:44.643383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:36.256 [2024-11-17 01:34:44.643396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:14:36.256 [2024-11-17 01:34:44.643406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.256 [2024-11-17 01:34:44.643493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.256 [2024-11-17 01:34:44.643504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:36.256 [2024-11-17 01:34:44.643513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:14:36.256 [2024-11-17 01:34:44.643520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.256 [2024-11-17 01:34:44.644539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2748.710 ms, result 0 00:14:36.256 { 00:14:36.256 "name": "ftl0", 00:14:36.256 "uuid": "88215db1-2f75-44b6-a7d7-b1d1a043e338" 00:14:36.256 } 00:14:36.256 01:34:44 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:36.256 01:34:44 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:14:36.256 01:34:44 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.256 01:34:44 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:14:36.256 01:34:44 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.256 01:34:44 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.256 01:34:44 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:36.516 01:34:44 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:36.776 [ 00:14:36.776 { 00:14:36.776 "name": "ftl0", 00:14:36.776 "aliases": [ 00:14:36.776 "88215db1-2f75-44b6-a7d7-b1d1a043e338" 00:14:36.776 ], 00:14:36.776 "product_name": "FTL 
disk", 00:14:36.776 "block_size": 4096, 00:14:36.776 "num_blocks": 20971520, 00:14:36.776 "uuid": "88215db1-2f75-44b6-a7d7-b1d1a043e338", 00:14:36.776 "assigned_rate_limits": { 00:14:36.776 "rw_ios_per_sec": 0, 00:14:36.776 "rw_mbytes_per_sec": 0, 00:14:36.776 "r_mbytes_per_sec": 0, 00:14:36.776 "w_mbytes_per_sec": 0 00:14:36.776 }, 00:14:36.776 "claimed": false, 00:14:36.776 "zoned": false, 00:14:36.776 "supported_io_types": { 00:14:36.776 "read": true, 00:14:36.776 "write": true, 00:14:36.776 "unmap": true, 00:14:36.776 "flush": true, 00:14:36.776 "reset": false, 00:14:36.776 "nvme_admin": false, 00:14:36.776 "nvme_io": false, 00:14:36.776 "nvme_io_md": false, 00:14:36.776 "write_zeroes": true, 00:14:36.776 "zcopy": false, 00:14:36.776 "get_zone_info": false, 00:14:36.776 "zone_management": false, 00:14:36.776 "zone_append": false, 00:14:36.776 "compare": false, 00:14:36.776 "compare_and_write": false, 00:14:36.776 "abort": false, 00:14:36.776 "seek_hole": false, 00:14:36.776 "seek_data": false, 00:14:36.776 "copy": false, 00:14:36.776 "nvme_iov_md": false 00:14:36.776 }, 00:14:36.776 "driver_specific": { 00:14:36.776 "ftl": { 00:14:36.776 "base_bdev": "30f8a871-4cf5-401a-b643-7832ec91dbe9", 00:14:36.776 "cache": "nvc0n1p0" 00:14:36.776 } 00:14:36.776 } 00:14:36.776 } 00:14:36.776 ] 00:14:36.776 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:14:36.776 01:34:45 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:36.776 01:34:45 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:37.035 01:34:45 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:14:37.035 01:34:45 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:37.035 [2024-11-17 01:34:45.465521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.035 [2024-11-17 01:34:45.465562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:37.035 [2024-11-17 01:34:45.465575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:37.035 [2024-11-17 01:34:45.465587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.035 [2024-11-17 01:34:45.465625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:37.035 [2024-11-17 01:34:45.468475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.035 [2024-11-17 01:34:45.468503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:37.035 [2024-11-17 01:34:45.468516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.830 ms 00:14:37.035 [2024-11-17 01:34:45.468525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.035 [2024-11-17 01:34:45.469003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.035 [2024-11-17 01:34:45.469121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:37.035 [2024-11-17 01:34:45.469139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:14:37.035 [2024-11-17 01:34:45.469147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.035 [2024-11-17 01:34:45.472403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.035 [2024-11-17 01:34:45.472493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:37.035 
[2024-11-17 01:34:45.472509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:14:37.035 [2024-11-17 01:34:45.472518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.035 [2024-11-17 01:34:45.477651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.035 [2024-11-17 01:34:45.477726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:14:37.035 [2024-11-17 01:34:45.477808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.102 ms 00:14:37.035 [2024-11-17 01:34:45.477828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.293 [2024-11-17 01:34:45.496098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.293 [2024-11-17 01:34:45.496189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:37.294 [2024-11-17 01:34:45.496232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.186 ms 00:14:37.294 [2024-11-17 01:34:45.496249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.294 [2024-11-17 01:34:45.508412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.294 [2024-11-17 01:34:45.508503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:37.294 [2024-11-17 01:34:45.508546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.102 ms 00:14:37.294 [2024-11-17 01:34:45.508566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.294 [2024-11-17 01:34:45.508715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.294 [2024-11-17 01:34:45.508738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:37.294 [2024-11-17 01:34:45.508756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:14:37.294 [2024-11-17 01:34:45.508816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.294 [2024-11-17 01:34:45.526532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.294 [2024-11-17 01:34:45.526615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:14:37.294 [2024-11-17 01:34:45.526655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.672 ms 00:14:37.294 [2024-11-17 01:34:45.526672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.294 [2024-11-17 01:34:45.544103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.294 [2024-11-17 01:34:45.544186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:14:37.294 [2024-11-17 01:34:45.544225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.388 ms 00:14:37.294 [2024-11-17 01:34:45.544242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.294 [2024-11-17 01:34:45.561624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.294 [2024-11-17 01:34:45.561707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:37.294 [2024-11-17 01:34:45.561747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.341 ms 00:14:37.294 [2024-11-17 01:34:45.561764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.294 [2024-11-17 01:34:45.579067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.294 [2024-11-17 01:34:45.579151] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:37.294 [2024-11-17 01:34:45.579191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.206 ms 00:14:37.294 [2024-11-17 01:34:45.579207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.294 [2024-11-17 01:34:45.579248] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:37.294 [2024-11-17 01:34:45.579271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.579979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 
[2024-11-17 01:34:45.580187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:14:37.294 [2024-11-17 01:34:45.580945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.580994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:37.294 [2024-11-17 01:34:45.581564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.581981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:37.295 [2024-11-17 01:34:45.582661] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:37.295 [2024-11-17 01:34:45.582678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88215db1-2f75-44b6-a7d7-b1d1a043e338 00:14:37.295 [2024-11-17 01:34:45.582701] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:37.295 [2024-11-17 01:34:45.582718] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:37.295 [2024-11-17 01:34:45.582733] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:37.295 [2024-11-17 01:34:45.582752] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:37.295 [2024-11-17 01:34:45.582847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:37.295 [2024-11-17 01:34:45.582870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:37.295 [2024-11-17 01:34:45.582884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:37.295 [2024-11-17 01:34:45.582901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:37.295 [2024-11-17 01:34:45.582914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:37.295 [2024-11-17 01:34:45.582923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.295 [2024-11-17 01:34:45.582930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:37.295 [2024-11-17 01:34:45.582939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:14:37.295 [2024-11-17 01:34:45.582945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.593303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.295 [2024-11-17 01:34:45.593383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:37.295 [2024-11-17 01:34:45.593445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.317 ms 00:14:37.295 [2024-11-17 01:34:45.593463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.593773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.295 [2024-11-17 01:34:45.593858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:37.295 [2024-11-17 01:34:45.593904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:14:37.295 [2024-11-17 01:34:45.593921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.630399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.630487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:37.295 [2024-11-17 01:34:45.630530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 01:34:45.630547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:14:37.295 [2024-11-17 01:34:45.630618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.630635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:37.295 [2024-11-17 01:34:45.630652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 01:34:45.630667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.630827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.630857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:37.295 [2024-11-17 01:34:45.630878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 01:34:45.630893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.630964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.630990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:37.295 [2024-11-17 01:34:45.631007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 01:34:45.631023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.697256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.697373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:37.295 [2024-11-17 01:34:45.697418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 01:34:45.697438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.748446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.748558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:37.295 [2024-11-17 01:34:45.748600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 01:34:45.748619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.748734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.748755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:37.295 [2024-11-17 01:34:45.748773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 01:34:45.748800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.748927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.748950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:37.295 [2024-11-17 01:34:45.748967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 01:34:45.748983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.749163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.749183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:37.295 [2024-11-17 01:34:45.749201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 
01:34:45.749251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.749318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.749337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:37.295 [2024-11-17 01:34:45.749354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 01:34:45.749369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.749444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.749500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:37.295 [2024-11-17 01:34:45.749539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.295 [2024-11-17 01:34:45.749558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.295 [2024-11-17 01:34:45.749624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.295 [2024-11-17 01:34:45.749672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:37.296 [2024-11-17 01:34:45.749692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.296 [2024-11-17 01:34:45.749708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.296 [2024-11-17 01:34:45.749923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 284.381 ms, result 0 00:14:37.553 true 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72343 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 72343 ']' 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 72343 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72343 00:14:37.553 killing process with pid 72343 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72343' 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 72343 00:14:37.553 01:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 72343 00:14:45.686 01:34:53 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:45.686 01:34:53 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:45.686 01:34:53 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:45.686 01:34:53 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:45.686 01:34:53 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:14:45.686 01:34:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:14:45.686 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:14:45.686 fio-3.35
00:14:45.686 Starting 1 thread
00:14:50.976
00:14:50.976 test: (groupid=0, jobs=1): err= 0: pid=72532: Sun Nov 17 01:34:59 2024
00:14:50.976 read: IOPS=822, BW=54.6MiB/s (57.3MB/s)(255MiB/4658msec)
00:14:50.976 slat (nsec): min=4297, max=35750, avg=7115.16, stdev=3627.68
00:14:50.976 clat (usec): min=259, max=1370, avg=545.96, stdev=234.43
00:14:50.976 lat (usec): min=264, max=1380, avg=553.07, stdev=236.43
00:14:50.976 clat percentiles (usec):
00:14:50.976 | 1.00th=[ 293], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318],
00:14:50.976 | 30.00th=[ 326], 40.00th=[ 404], 50.00th=[ 465], 60.00th=[ 570],
00:14:50.976 | 70.00th=[ 660], 80.00th=[ 840], 90.00th=[ 906], 95.00th=[ 930],
00:14:50.976 | 99.00th=[ 1074], 99.50th=[ 1123], 99.90th=[ 1303], 99.95th=[ 1336],
00:14:50.976 | 99.99th=[ 1369]
00:14:50.976 write: IOPS=828, BW=55.0MiB/s (57.7MB/s)(256MiB/4653msec); 0 zone resets
00:14:50.976 slat (nsec): min=14695, max=69647, avg=21087.58, stdev=5502.45
00:14:50.976 clat (usec): min=286, max=1989, avg=623.56, stdev=286.98
00:14:50.976 lat (usec): min=311, max=2025, avg=644.65, stdev=290.13
00:14:50.976 clat percentiles (usec):
00:14:50.976 | 1.00th=[ 322], 5.00th=[ 338], 10.00th=[ 338], 20.00th=[ 347],
00:14:50.976 | 30.00th=[ 363], 40.00th=[ 469], 50.00th=[ 594], 60.00th=[ 652],
00:14:50.976 | 70.00th=[ 758], 80.00th=[ 922], 90.00th=[ 979], 95.00th=[ 1045],
00:14:50.976 | 99.00th=[ 1663], 99.50th=[ 1729], 99.90th=[ 1893], 99.95th=[ 1958],
00:14:50.976 | 99.99th=[ 1991]
00:14:50.976 bw ( KiB/s): min=33320, max=91800, per=98.25%, avg=55367.11, stdev=21105.04, samples=9
00:14:50.976 iops : min= 490, max= 1350, avg=814.22, stdev=310.37, samples=9
00:14:50.976 lat (usec) : 500=47.68%, 750=24.00%, 1000=23.63%
00:14:50.976 lat (msec) : 2=4.70%
00:14:50.976 cpu : usr=99.03%, sys=0.17%, ctx=10, majf=0, minf=1169
00:14:50.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:50.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:50.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:50.976 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:50.976 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:50.976
00:14:50.976 Run status group 0 (all jobs):
00:14:50.976 READ: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=255MiB (267MB), run=4658-4658msec
00:14:50.976 WRITE: bw=55.0MiB/s (57.7MB/s), 55.0MiB/s-55.0MiB/s (57.7MB/s-57.7MB/s), io=256MiB (269MB), run=4653-4653msec
00:14:52.360 -----------------------------------------------------
00:14:52.360 Suppressions used:
00:14:52.360 count bytes template
00:14:52.360 1 5 /usr/src/fio/parse.c
00:14:52.360 1 8 libtcmalloc_minimal.so
00:14:52.360 1 904 libcrypto.so
00:14:52.360 -----------------------------------------------------
00:14:52.360
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:14:52.360 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:14:52.620 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:52.620 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:14:52.620 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:14:52.620 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:14:52.620 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:14:52.620 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:14:52.620 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:14:52.620 01:35:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:14:52.620 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:14:52.620 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:14:52.620 fio-3.35
00:14:52.620 Starting 2 threads
00:15:19.167
00:15:19.167 first_half: (groupid=0, jobs=1): err= 0: pid=72641: Sun Nov 17 01:35:24 2024
00:15:19.167 read: IOPS=2916, BW=11.4MiB/s (11.9MB/s)(255MiB/22368msec)
00:15:19.167 slat (nsec): min=2865, max=32791, avg=3887.04, stdev=1222.14
00:15:19.167 clat (usec): min=583, max=386450, avg=33509.10, stdev=19367.82
00:15:19.167 lat (usec): min=587, max=386454, avg=33512.99, stdev=19368.01
00:15:19.167 clat percentiles (msec):
00:15:19.167 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30],
00:15:19.167 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31],
00:15:19.167 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 43],
00:15:19.167 | 99.00th=[ 136], 99.50th=[ 155], 99.90th=[ 245], 99.95th=[ 330],
00:15:19.167 | 99.99th=[ 376]
00:15:19.167 write: IOPS=3438, BW=13.4MiB/s (14.1MB/s)(256MiB/19061msec); 0 zone resets
00:15:19.167 slat (usec): min=3, max=1630, avg= 5.90, stdev=13.84
00:15:19.167 clat (usec): min=339, max=76654, avg=10260.03, stdev=15880.00
00:15:19.167 lat (usec): min=345, max=76659, avg=10265.93, stdev=15880.21
00:15:19.167 clat percentiles (usec):
00:15:19.167 | 1.00th=[ 603], 5.00th=[ 717], 10.00th=[ 816], 20.00th=[ 1045],
00:15:19.167 | 30.00th=[ 2442], 40.00th=[ 3458], 50.00th=[ 4817], 60.00th=[ 5473],
00:15:19.167 | 70.00th=[ 6390], 80.00th=[12518], 90.00th=[27657], 95.00th=[55837],
00:15:19.167 | 99.00th=[64226], 99.50th=[65799], 99.90th=[69731], 99.95th=[74974],
00:15:19.167 | 99.99th=[76022]
00:15:19.167 bw ( KiB/s): min= 968, max=41904, per=90.77%, avg=24966.10, stdev=12172.55, samples=21
00:15:19.167 iops : min= 242, max=10476, avg=6241.52, stdev=3043.14, samples=21
00:15:19.167 lat (usec) : 500=0.03%, 750=3.27%, 1000=6.02%
00:15:19.168 lat (msec) : 2=4.64%, 4=8.73%, 10=16.48%, 20=7.14%, 50=47.43%
00:15:19.168 lat (msec) : 100=5.27%, 250=0.96%, 500=0.05%
00:15:19.168 cpu : usr=99.19%, sys=0.17%, ctx=43, majf=0, minf=5589
00:15:19.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:15:19.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:19.168 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:19.168 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:19.168 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:19.168 second_half: (groupid=0, jobs=1): err= 0: pid=72642: Sun Nov 17 01:35:24 2024
00:15:19.168 read: IOPS=2933, BW=11.5MiB/s (12.0MB/s)(255MiB/22222msec)
00:15:19.168 slat (nsec): min=3040, max=39831, avg=5659.93, stdev=1580.39
00:15:19.168 clat (usec): min=539, max=387293, avg=33730.51, stdev=17102.01
00:15:19.168 lat (usec): min=544, max=387307, avg=33736.17, stdev=17102.19
00:15:19.168 clat percentiles (msec):
00:15:19.168 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30],
00:15:19.168 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31],
00:15:19.168 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 45],
00:15:19.168 | 99.00th=[ 125], 99.50th=[ 142], 99.90th=[ 199], 99.95th=[ 271],
00:15:19.168 | 99.99th=[ 384]
00:15:19.168 write: IOPS=3793, BW=14.8MiB/s (15.5MB/s)(256MiB/17276msec); 0 zone resets
00:15:19.168 slat (usec): min=3, max=1114, avg= 6.93, stdev= 7.07
00:15:19.168 clat (usec): min=355, max=77469, avg=9825.24, stdev=15645.57
00:15:19.168 lat (usec): min=361, max=77475, avg=9832.17, stdev=15645.67
00:15:19.168 clat percentiles (usec):
00:15:19.168 | 1.00th=[ 627], 5.00th=[ 750], 10.00th=[ 840], 20.00th=[ 1057],
00:15:19.168 | 30.00th=[ 2278], 40.00th=[ 3720], 50.00th=[ 4752], 60.00th=[ 5407],
00:15:19.168 | 70.00th=[ 6063], 80.00th=[11338], 90.00th=[18744], 95.00th=[55837],
00:15:19.168 | 99.00th=[64226], 99.50th=[66323], 99.90th=[69731], 99.95th=[71828],
00:15:19.168 | 99.99th=[76022]
00:15:19.168 bw ( KiB/s): min= 8, max=44312, per=95.30%, avg=26214.40, stdev=12078.92, samples=20
00:15:19.168 iops : min= 2, max=11078, avg=6553.60, stdev=3019.73, samples=20
00:15:19.168 lat (usec) : 500=0.03%, 750=2.57%, 1000=6.44%
00:15:19.168 lat (msec) : 2=5.55%, 4=6.93%, 10=17.80%, 20=7.14%, 50=47.06%
00:15:19.168 lat (msec) : 100=5.68%, 250=0.76%, 500=0.03%
00:15:19.168 cpu : usr=99.35%, sys=0.10%, ctx=29, majf=0, minf=5538
00:15:19.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:15:19.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:19.168 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:19.168 issued rwts: total=65191,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:19.168 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:19.168
00:15:19.168 Run status group 0 (all jobs):
00:15:19.168 READ: bw=22.8MiB/s (23.9MB/s), 11.4MiB/s-11.5MiB/s (11.9MB/s-12.0MB/s), io=509MiB (534MB), run=22222-22368msec
00:15:19.168 WRITE: bw=26.9MiB/s (28.2MB/s), 13.4MiB/s-14.8MiB/s (14.1MB/s-15.5MB/s), io=512MiB (537MB), run=17276-19061msec
00:15:19.168 -----------------------------------------------------
00:15:19.168 Suppressions used:
00:15:19.168 count bytes template
00:15:19.168 2 10 /usr/src/fio/parse.c
00:15:19.168 2 192 /usr/src/fio/iolog.c
00:15:19.168 1 8 libtcmalloc_minimal.so
00:15:19.168 1 904 libcrypto.so
00:15:19.168 -----------------------------------------------------
00:15:19.168
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:19.168 01:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:15:19.168 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:15:19.168 fio-3.35
00:15:19.168 Starting 1 thread
00:15:34.067
00:15:34.067 test: (groupid=0, jobs=1): err= 0: pid=72939: Sun Nov 17 01:35:41 2024
00:15:34.067 read: IOPS=8158, BW=31.9MiB/s (33.4MB/s)(255MiB/7992msec)
00:15:34.067 slat (nsec): min=2984, max=37455, avg=4729.83, stdev=1113.30
00:15:34.067 clat (usec): min=519, max=31736, avg=15679.52, stdev=1693.66
00:15:34.067 lat (usec): min=523, max=31741, avg=15684.25, stdev=1693.67
00:15:34.067 clat percentiles (usec):
00:15:34.067 | 1.00th=[13566], 5.00th=[13698], 10.00th=[13829], 20.00th=[14484],
00:15:34.067 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795],
00:15:34.067 | 70.00th=[16057], 80.00th=[16188], 90.00th=[16450], 95.00th=[17957],
00:15:34.067 | 99.00th=[24249], 99.50th=[24773], 99.90th=[26346], 99.95th=[27919],
00:15:34.067 | 99.99th=[30802]
00:15:34.067 write: IOPS=12.8k, BW=50.1MiB/s (52.5MB/s)(256MiB/5113msec); 0 zone resets
00:15:34.067 slat (usec): min=4, max=175, avg= 7.02, stdev= 3.08
00:15:34.067 clat (usec): min=463, max=57448, avg=9946.02, stdev=10851.63
00:15:34.067 lat (usec): min=467, max=57454, avg=9953.04, stdev=10851.97
00:15:34.067 clat percentiles (usec):
00:15:34.067 | 1.00th=[ 611], 5.00th=[ 750], 10.00th=[ 840], 20.00th=[ 979],
00:15:34.067 | 30.00th=[ 1188], 40.00th=[ 1811], 50.00th=[ 5997], 60.00th=[ 7767],
00:15:34.067 | 70.00th=[14484], 80.00th=[19006], 90.00th=[26608], 95.00th=[31851],
00:15:34.067 | 99.00th=[40633], 99.50th=[46924], 99.90th=[54264], 99.95th=[55313],
00:15:34.067 | 99.99th=[56361]
00:15:34.067 bw ( KiB/s): min= 9928, max=70896, per=92.96%, avg=47662.55, stdev=19919.84, samples=11
00:15:34.067 iops : min= 2482, max=17724, avg=11915.64, stdev=4979.96, samples=11
00:15:34.067 lat (usec) : 500=0.01%, 750=2.47%, 1000=8.12%
00:15:34.067 lat (msec) : 2=9.71%, 4=0.75%, 10=11.29%, 20=57.18%, 50=10.34%
00:15:34.067 lat (msec) : 100=0.13%
00:15:34.067 cpu : usr=99.14%, sys=0.11%, ctx=106, majf=0, minf=5565
00:15:34.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:15:34.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:34.067 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:34.067 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:34.067 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:34.067
00:15:34.067 Run status group 0 (all jobs):
00:15:34.067 READ: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=255MiB (267MB), run=7992-7992msec
00:15:34.067 WRITE: bw=50.1MiB/s (52.5MB/s), 50.1MiB/s-50.1MiB/s (52.5MB/s-52.5MB/s), io=256MiB (268MB), run=5113-5113msec
00:15:34.639 -----------------------------------------------------
00:15:34.639 Suppressions used:
00:15:34.639 count bytes template
00:15:34.639 1 5 /usr/src/fio/parse.c
00:15:34.639 2 192 /usr/src/fio/iolog.c
00:15:34.639 1 8 libtcmalloc_minimal.so
00:15:34.639 1 904 libcrypto.so
00:15:34.639 -----------------------------------------------------
00:15:34.639
00:15:34.639 01:35:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:15:34.639 01:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:34.639 01:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:15:34.639 01:35:43 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:15:34.899 Remove shared memory files
00:15:34.900 01:35:43 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:15:34.900 01:35:43 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:15:34.900 01:35:43 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:15:34.900 01:35:43 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:15:34.900 01:35:43 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57183 /dev/shm/spdk_tgt_trace.pid71267
00:15:34.900 01:35:43 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:15:34.900 01:35:43 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:15:34.900 ************************************
00:15:34.900 END TEST ftl_fio_basic
00:15:34.900 ************************************
00:15:34.900
00:15:34.900 real 1m5.105s
00:15:34.900 user 2m16.328s
00:15:34.900 sys 0m3.191s
00:15:34.900 01:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:34.900 01:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:15:34.900 01:35:43 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:15:34.900 01:35:43 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:15:34.900 01:35:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:34.900 01:35:43 ftl -- common/autotest_common.sh@10 -- # set +x
00:15:34.900 ************************************
00:15:34.900 START TEST ftl_bdevperf
00:15:34.900 ************************************
00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:15:34.900 * Looking for test storage...
00:15:34.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.900 --rc genhtml_branch_coverage=1 00:15:34.900 --rc genhtml_function_coverage=1 00:15:34.900 --rc genhtml_legend=1 00:15:34.900 --rc geninfo_all_blocks=1 00:15:34.900 --rc geninfo_unexecuted_blocks=1 00:15:34.900 00:15:34.900 ' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.900 --rc genhtml_branch_coverage=1 00:15:34.900 
--rc genhtml_function_coverage=1 00:15:34.900 --rc genhtml_legend=1 00:15:34.900 --rc geninfo_all_blocks=1 00:15:34.900 --rc geninfo_unexecuted_blocks=1 00:15:34.900 00:15:34.900 ' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.900 --rc genhtml_branch_coverage=1 00:15:34.900 --rc genhtml_function_coverage=1 00:15:34.900 --rc genhtml_legend=1 00:15:34.900 --rc geninfo_all_blocks=1 00:15:34.900 --rc geninfo_unexecuted_blocks=1 00:15:34.900 00:15:34.900 ' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.900 --rc genhtml_branch_coverage=1 00:15:34.900 --rc genhtml_function_coverage=1 00:15:34.900 --rc genhtml_legend=1 00:15:34.900 --rc geninfo_all_blocks=1 00:15:34.900 --rc geninfo_unexecuted_blocks=1 00:15:34.900 00:15:34.900 ' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73177 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73177 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 73177 ']' 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.900 01:35:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:35.159 [2024-11-17 01:35:43.419662] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:35.159 [2024-11-17 01:35:43.420064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73177 ] 00:15:35.159 [2024-11-17 01:35:43.579092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.417 [2024-11-17 01:35:43.682355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.982 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.982 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:15:35.982 01:35:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:35.982 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:35.982 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:35.982 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:35.982 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:35.982 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:36.241 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:36.241 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:36.241 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:36.241 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:36.241 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:36.241 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:15:36.241 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:36.241 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:36.500 { 00:15:36.500 "name": "nvme0n1", 00:15:36.500 "aliases": [ 00:15:36.500 "21746098-a956-4ea5-a388-b104de47fef0" 00:15:36.500 ], 00:15:36.500 "product_name": "NVMe disk", 00:15:36.500 "block_size": 4096, 00:15:36.500 "num_blocks": 1310720, 00:15:36.500 "uuid": "21746098-a956-4ea5-a388-b104de47fef0", 00:15:36.500 "numa_id": -1, 00:15:36.500 "assigned_rate_limits": { 00:15:36.500 "rw_ios_per_sec": 0, 00:15:36.500 "rw_mbytes_per_sec": 0, 00:15:36.500 "r_mbytes_per_sec": 0, 00:15:36.500 "w_mbytes_per_sec": 0 00:15:36.500 }, 00:15:36.500 "claimed": true, 00:15:36.500 "claim_type": "read_many_write_one", 00:15:36.500 "zoned": false, 00:15:36.500 "supported_io_types": { 00:15:36.500 "read": true, 00:15:36.500 "write": true, 00:15:36.500 "unmap": true, 00:15:36.500 "flush": true, 00:15:36.500 "reset": true, 00:15:36.500 "nvme_admin": true, 00:15:36.500 "nvme_io": true, 00:15:36.500 "nvme_io_md": false, 00:15:36.500 "write_zeroes": true, 00:15:36.500 "zcopy": false, 00:15:36.500 "get_zone_info": false, 00:15:36.500 "zone_management": false, 00:15:36.500 "zone_append": false, 00:15:36.500 "compare": true, 00:15:36.500 "compare_and_write": false, 00:15:36.500 "abort": true, 00:15:36.500 "seek_hole": false, 00:15:36.500 "seek_data": false, 00:15:36.500 "copy": true, 00:15:36.500 "nvme_iov_md": false 00:15:36.500 }, 00:15:36.500 "driver_specific": { 00:15:36.500 
"nvme": [ 00:15:36.500 { 00:15:36.500 "pci_address": "0000:00:11.0", 00:15:36.500 "trid": { 00:15:36.500 "trtype": "PCIe", 00:15:36.500 "traddr": "0000:00:11.0" 00:15:36.500 }, 00:15:36.500 "ctrlr_data": { 00:15:36.500 "cntlid": 0, 00:15:36.500 "vendor_id": "0x1b36", 00:15:36.500 "model_number": "QEMU NVMe Ctrl", 00:15:36.500 "serial_number": "12341", 00:15:36.500 "firmware_revision": "8.0.0", 00:15:36.500 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:36.500 "oacs": { 00:15:36.500 "security": 0, 00:15:36.500 "format": 1, 00:15:36.500 "firmware": 0, 00:15:36.500 "ns_manage": 1 00:15:36.500 }, 00:15:36.500 "multi_ctrlr": false, 00:15:36.500 "ana_reporting": false 00:15:36.500 }, 00:15:36.500 "vs": { 00:15:36.500 "nvme_version": "1.4" 00:15:36.500 }, 00:15:36.500 "ns_data": { 00:15:36.500 "id": 1, 00:15:36.500 "can_share": false 00:15:36.500 } 00:15:36.500 } 00:15:36.500 ], 00:15:36.500 "mp_policy": "active_passive" 00:15:36.500 } 00:15:36.500 } 00:15:36.500 ]' 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:36.500 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:36.759 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0df028e4-17ed-4e2a-b2c0-7e7de83a22fb 00:15:36.759 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:36.759 01:35:44 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0df028e4-17ed-4e2a-b2c0-7e7de83a22fb 00:15:36.759 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:37.017 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c2722f28-c539-4ebe-96f7-9a5edf7bf65c 00:15:37.017 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c2722f28-c539-4ebe-96f7-9a5edf7bf65c 00:15:37.274 01:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:37.274 01:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:37.274 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:37.274 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:37.275 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:37.275 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:37.275 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:37.275 01:35:45 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:37.275 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:37.275 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:15:37.275 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:37.275 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:37.533 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:37.533 { 00:15:37.533 "name": "0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194", 00:15:37.533 "aliases": [ 00:15:37.533 "lvs/nvme0n1p0" 00:15:37.533 ], 00:15:37.533 "product_name": "Logical Volume", 00:15:37.533 "block_size": 4096, 00:15:37.533 "num_blocks": 26476544, 00:15:37.533 "uuid": "0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194", 00:15:37.533 "assigned_rate_limits": { 00:15:37.533 "rw_ios_per_sec": 0, 00:15:37.533 "rw_mbytes_per_sec": 0, 00:15:37.533 "r_mbytes_per_sec": 0, 00:15:37.533 "w_mbytes_per_sec": 0 00:15:37.533 }, 00:15:37.533 "claimed": false, 00:15:37.533 "zoned": false, 00:15:37.533 "supported_io_types": { 00:15:37.533 "read": true, 00:15:37.533 "write": true, 00:15:37.533 "unmap": true, 00:15:37.533 "flush": false, 00:15:37.533 "reset": true, 00:15:37.533 "nvme_admin": false, 00:15:37.533 "nvme_io": false, 00:15:37.533 "nvme_io_md": false, 00:15:37.533 "write_zeroes": true, 00:15:37.533 "zcopy": false, 00:15:37.533 "get_zone_info": false, 00:15:37.533 "zone_management": false, 00:15:37.533 "zone_append": false, 00:15:37.533 "compare": false, 00:15:37.533 "compare_and_write": false, 00:15:37.533 "abort": false, 00:15:37.533 "seek_hole": true, 00:15:37.533 "seek_data": true, 00:15:37.533 "copy": false, 00:15:37.533 "nvme_iov_md": false 00:15:37.533 }, 00:15:37.533 "driver_specific": { 00:15:37.533 "lvol": { 00:15:37.533 "lvol_store_uuid": "c2722f28-c539-4ebe-96f7-9a5edf7bf65c", 00:15:37.533 "base_bdev": "nvme0n1", 00:15:37.533 "thin_provision": true, 00:15:37.533 "num_allocated_clusters": 0, 00:15:37.533 "snapshot": false, 00:15:37.533 "clone": false, 00:15:37.533 "esnap_clone": false 00:15:37.533 } 00:15:37.533 } 00:15:37.533 } 00:15:37.533 ]' 00:15:37.533 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:37.533 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:37.533 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:37.533 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:37.533 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:37.533 01:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:15:37.533 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:37.533 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:37.533 01:35:45 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:37.792 01:35:46 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:37.792 01:35:46 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:37.792 01:35:46 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:37.792 01:35:46 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:37.792 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:37.792 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:15:37.792 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:37.792 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:38.050 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:38.050 { 00:15:38.050 "name": "0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194", 00:15:38.050 "aliases": [ 00:15:38.050 "lvs/nvme0n1p0" 00:15:38.050 ], 00:15:38.050 "product_name": "Logical Volume", 00:15:38.050 "block_size": 4096, 00:15:38.050 "num_blocks": 26476544, 00:15:38.050 "uuid": "0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194", 00:15:38.050 "assigned_rate_limits": { 00:15:38.050 "rw_ios_per_sec": 0, 00:15:38.050 "rw_mbytes_per_sec": 0, 00:15:38.050 "r_mbytes_per_sec": 0, 00:15:38.050 "w_mbytes_per_sec": 0 00:15:38.050 }, 00:15:38.050 "claimed": false, 00:15:38.050 "zoned": false, 00:15:38.050 "supported_io_types": { 00:15:38.050 "read": true, 00:15:38.050 "write": true, 00:15:38.050 "unmap": true, 00:15:38.050 "flush": false, 00:15:38.050 "reset": true, 00:15:38.050 "nvme_admin": false, 00:15:38.050 "nvme_io": false, 00:15:38.050 "nvme_io_md": false, 00:15:38.050 "write_zeroes": true, 00:15:38.050 "zcopy": false, 00:15:38.050 "get_zone_info": false, 00:15:38.050 "zone_management": false, 00:15:38.050 "zone_append": false, 00:15:38.050 "compare": false, 00:15:38.050 "compare_and_write": false, 00:15:38.050 "abort": false, 00:15:38.050 "seek_hole": true, 00:15:38.050 "seek_data": true, 00:15:38.050 "copy": false, 00:15:38.050 "nvme_iov_md": false 00:15:38.050 }, 00:15:38.050 "driver_specific": { 00:15:38.050 "lvol": { 00:15:38.050 "lvol_store_uuid": "c2722f28-c539-4ebe-96f7-9a5edf7bf65c", 00:15:38.050 "base_bdev": "nvme0n1", 00:15:38.050 "thin_provision": true, 00:15:38.050 "num_allocated_clusters": 0, 00:15:38.050 "snapshot": false, 00:15:38.050 "clone": false, 00:15:38.050 "esnap_clone": false 00:15:38.050 } 00:15:38.050 } 00:15:38.050 } 00:15:38.050 ]' 00:15:38.050 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:38.050 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:38.050 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:38.050 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:38.050 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:38.050 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:15:38.050 01:35:46 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:15:38.050 01:35:46 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:38.308 01:35:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:15:38.308 01:35:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:38.308 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:38.308 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:38.308 01:35:46 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:15:38.308 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:38.308 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 00:15:38.309 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:38.309 { 00:15:38.309 "name": "0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194", 00:15:38.309 "aliases": [ 00:15:38.309 "lvs/nvme0n1p0" 00:15:38.309 ], 00:15:38.309 "product_name": "Logical Volume", 00:15:38.309 "block_size": 4096, 00:15:38.309 "num_blocks": 26476544, 00:15:38.309 "uuid": "0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194", 00:15:38.309 "assigned_rate_limits": { 00:15:38.309 "rw_ios_per_sec": 0, 00:15:38.309 "rw_mbytes_per_sec": 0, 00:15:38.309 "r_mbytes_per_sec": 0, 00:15:38.309 "w_mbytes_per_sec": 0 00:15:38.309 }, 00:15:38.309 "claimed": false, 00:15:38.309 "zoned": false, 00:15:38.309 "supported_io_types": { 00:15:38.309 "read": true, 00:15:38.309 "write": true, 00:15:38.309 "unmap": true, 00:15:38.309 "flush": false, 00:15:38.309 "reset": true, 00:15:38.309 "nvme_admin": false, 00:15:38.309 "nvme_io": false, 00:15:38.309 "nvme_io_md": false, 00:15:38.309 "write_zeroes": true, 00:15:38.309 "zcopy": false, 00:15:38.309 "get_zone_info": false, 00:15:38.309 "zone_management": false, 00:15:38.309 "zone_append": false, 00:15:38.309 "compare": false, 00:15:38.309 "compare_and_write": false, 00:15:38.309 "abort": false, 00:15:38.309 "seek_hole": true, 00:15:38.309 "seek_data": true, 00:15:38.309 "copy": false, 00:15:38.309 "nvme_iov_md": false 00:15:38.309 }, 00:15:38.309 "driver_specific": { 00:15:38.309 "lvol": { 00:15:38.309 "lvol_store_uuid": "c2722f28-c539-4ebe-96f7-9a5edf7bf65c", 00:15:38.309 "base_bdev": "nvme0n1", 00:15:38.309 "thin_provision": true, 00:15:38.309 "num_allocated_clusters": 0, 00:15:38.309 "snapshot": false, 00:15:38.309 "clone": false, 00:15:38.309 "esnap_clone": false 00:15:38.309 } 00:15:38.309 } 00:15:38.309 } 00:15:38.309 ]' 00:15:38.309 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:38.568 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:38.568 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:38.568 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:38.568 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:38.568 01:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:15:38.568 01:35:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:15:38.568 01:35:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194 -c nvc0n1p0 --l2p_dram_limit 20 00:15:38.568 [2024-11-17 01:35:46.985194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.568 [2024-11-17 01:35:46.985241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:38.568 [2024-11-17 01:35:46.985253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:38.568 [2024-11-17 01:35:46.985262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.568 [2024-11-17 01:35:46.985301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.568 [2024-11-17 01:35:46.985314] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:38.568 [2024-11-17 01:35:46.985321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:38.568 [2024-11-17 01:35:46.985328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.568 [2024-11-17 01:35:46.985342] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:38.568 [2024-11-17 01:35:46.985891] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:38.568 [2024-11-17 01:35:46.985911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.568 [2024-11-17 01:35:46.985920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:38.568 [2024-11-17 01:35:46.985928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:15:38.568 [2024-11-17 01:35:46.985936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.568 [2024-11-17 01:35:46.985958] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0bacc49e-58b7-4576-8b83-fefdfc8718a1 00:15:38.568 [2024-11-17 01:35:46.987249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.568 [2024-11-17 01:35:46.987281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:38.568 [2024-11-17 01:35:46.987291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:38.568 [2024-11-17 01:35:46.987301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.568 [2024-11-17 01:35:46.994148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.568 [2024-11-17 01:35:46.994175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:38.568 [2024-11-17 01:35:46.994184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.806 ms 00:15:38.568 [2024-11-17 01:35:46.994190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.568 [2024-11-17 01:35:46.994295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.568 [2024-11-17 01:35:46.994303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:38.568 [2024-11-17 01:35:46.994314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:15:38.568 [2024-11-17 01:35:46.994320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.568 [2024-11-17 01:35:46.994354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.568 [2024-11-17 01:35:46.994361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:38.568 [2024-11-17 01:35:46.994368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:38.568 [2024-11-17 01:35:46.994374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.568 [2024-11-17 01:35:46.994390] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:38.568 [2024-11-17 01:35:46.997598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.568 [2024-11-17 01:35:46.997629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:38.568 [2024-11-17 01:35:46.997637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:15:38.568 [2024-11-17 01:35:46.997645] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.568 [2024-11-17 01:35:46.997672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.568 [2024-11-17 01:35:46.997680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:38.568 [2024-11-17 01:35:46.997687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:38.568 [2024-11-17 01:35:46.997695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.568 [2024-11-17 01:35:46.997712] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:38.568 [2024-11-17 01:35:46.997840] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:38.569 [2024-11-17 01:35:46.997851] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:38.569 [2024-11-17 01:35:46.997862] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:38.569 [2024-11-17 01:35:46.997871] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:38.569 [2024-11-17 01:35:46.997880] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:38.569 [2024-11-17 01:35:46.997887] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:38.569 [2024-11-17 01:35:46.997895] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:38.569 [2024-11-17 01:35:46.997901] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:38.569 [2024-11-17 01:35:46.997911] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:38.569 [2024-11-17 01:35:46.997917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.569 [2024-11-17 01:35:46.997926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:38.569 [2024-11-17 01:35:46.997932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:15:38.569 [2024-11-17 01:35:46.997939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.569 [2024-11-17 01:35:46.998001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.569 [2024-11-17 01:35:46.998010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:38.569 [2024-11-17 01:35:46.998017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:38.569 [2024-11-17 01:35:46.998025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.569 [2024-11-17 01:35:46.998094] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:38.569 [2024-11-17 01:35:46.998103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:38.569 [2024-11-17 01:35:46.998111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:38.569 [2024-11-17 01:35:46.998118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:38.569 [2024-11-17 01:35:46.998130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:38.569 
[2024-11-17 01:35:46.998142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:38.569 [2024-11-17 01:35:46.998147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:38.569 [2024-11-17 01:35:46.998162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:38.569 [2024-11-17 01:35:46.998169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:38.569 [2024-11-17 01:35:46.998174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:38.569 [2024-11-17 01:35:46.998187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:38.569 [2024-11-17 01:35:46.998192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:38.569 [2024-11-17 01:35:46.998200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:38.569 [2024-11-17 01:35:46.998212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:38.569 [2024-11-17 01:35:46.998216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:38.569 [2024-11-17 01:35:46.998230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:38.569 [2024-11-17 01:35:46.998242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:38.569 [2024-11-17 01:35:46.998249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:38.569 [2024-11-17 01:35:46.998260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:38.569 [2024-11-17 01:35:46.998265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:38.569 [2024-11-17 01:35:46.998277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:38.569 [2024-11-17 01:35:46.998283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:38.569 [2024-11-17 01:35:46.998296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:38.569 [2024-11-17 01:35:46.998302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:38.569 [2024-11-17 01:35:46.998314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:38.569 [2024-11-17 01:35:46.998320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:38.569 [2024-11-17 01:35:46.998325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:38.569 [2024-11-17 01:35:46.998331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:38.569 [2024-11-17 01:35:46.998336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:15:38.569 [2024-11-17 01:35:46.998342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:38.569 [2024-11-17 01:35:46.998354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:38.569 [2024-11-17 01:35:46.998359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998366] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:38.569 [2024-11-17 01:35:46.998372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:38.569 [2024-11-17 01:35:46.998379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:38.569 [2024-11-17 01:35:46.998385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.569 [2024-11-17 01:35:46.998394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:38.569 [2024-11-17 01:35:46.998400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:38.569 [2024-11-17 01:35:46.998407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:38.569 [2024-11-17 01:35:46.998413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:38.569 [2024-11-17 01:35:46.998420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:38.569 [2024-11-17 01:35:46.998425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:38.569 [2024-11-17 01:35:46.998436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:38.569 [2024-11-17 01:35:46.998443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:38.569 [2024-11-17 01:35:46.998452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:38.569 [2024-11-17 01:35:46.998458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:38.569 [2024-11-17 01:35:46.998465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:38.569 [2024-11-17 01:35:46.998471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:38.569 [2024-11-17 01:35:46.998479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:38.569 [2024-11-17 01:35:46.998485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:38.569 [2024-11-17 01:35:46.998491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:38.569 [2024-11-17 01:35:46.998497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:38.569 [2024-11-17 01:35:46.998506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:38.569 [2024-11-17 01:35:46.998511] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:38.569 [2024-11-17 01:35:46.998518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:38.569 [2024-11-17 01:35:46.998524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:38.569 [2024-11-17 01:35:46.998531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:38.569 [2024-11-17 01:35:46.998536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:38.569 [2024-11-17 01:35:46.998543] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:38.569 [2024-11-17 01:35:46.998550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:38.569 [2024-11-17 01:35:46.998558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:38.569 [2024-11-17 01:35:46.998564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:38.569 [2024-11-17 01:35:46.998570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:38.569 [2024-11-17 01:35:46.998576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:38.569 [2024-11-17 01:35:46.998584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.569 [2024-11-17 01:35:46.998592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:38.569 [2024-11-17 01:35:46.998598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:15:38.569 [2024-11-17 01:35:46.998604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.570 [2024-11-17 01:35:46.998643] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
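
The startup trace above all flows from one bdev_ftl_create call; before issuing it, the harness read the lvol's geometry (block_size 4096 x num_blocks 26476544 = 103424 MiB) and derived the device size. The create call binds that lvol as the base device, nvc0n1p0 as the non-volatile write cache, and a 20 MiB DRAM budget for the L2P table (the layout dump's 20971520 four-byte L2P entries total 80 MiB on media; --l2p_dram_limit caps how much of that table stays resident in RAM). A minimal sketch of the same sequence against a running SPDK target follows; the RPC names and flags are the ones visible in the trace, while the shell variables around them are illustrative:

    # Sketch: replay the create path by hand against a running SPDK target.
    # RPC names/flags are from the trace above; variable names are illustrative.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    base=0436fa8b-13a3-4dd3-a0fe-62dd8d9ca194        # the lvs/nvme0n1p0 lvol dumped above

    bs=$($rpc bdev_get_bdevs -b "$base" | jq '.[] .block_size')   # 4096
    nb=$($rpc bdev_get_bdevs -b "$base" | jq '.[] .num_blocks')   # 26476544
    echo $(( bs * nb / 1024 / 1024 ))                             # 103424 (MiB)

    # Base bdev + NV cache partition + 20 MiB L2P DRAM limit -> ftl0.
    # -t 240 raises the RPC timeout: a first-time startup scrubs the NV cache.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 20
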
00:15:38.570 [2024-11-17 01:35:46.998651] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:42.824 [2024-11-17 01:35:50.680785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.680843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:42.824 [2024-11-17 01:35:50.680862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3682.126 ms 00:15:42.824 [2024-11-17 01:35:50.680870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.704946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.704988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:42.824 [2024-11-17 01:35:50.705001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.797 ms 00:15:42.824 [2024-11-17 01:35:50.705008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.705107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.705115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:42.824 [2024-11-17 01:35:50.705127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:15:42.824 [2024-11-17 01:35:50.705133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.744395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.744431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:42.824 [2024-11-17 01:35:50.744445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.235 ms 00:15:42.824 [2024-11-17 01:35:50.744452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.744482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.744492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:42.824 [2024-11-17 01:35:50.744500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:42.824 [2024-11-17 01:35:50.744506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.744959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.744975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:42.824 [2024-11-17 01:35:50.744984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:15:42.824 [2024-11-17 01:35:50.744990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.745081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.745089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:42.824 [2024-11-17 01:35:50.745099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:15:42.824 [2024-11-17 01:35:50.745105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.757085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.757109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:42.824 [2024-11-17 
01:35:50.757118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.964 ms 00:15:42.824 [2024-11-17 01:35:50.757125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.766892] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:42.824 [2024-11-17 01:35:50.772320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.772483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:42.824 [2024-11-17 01:35:50.772496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.140 ms 00:15:42.824 [2024-11-17 01:35:50.772504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.850677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.850710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:42.824 [2024-11-17 01:35:50.850720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.154 ms 00:15:42.824 [2024-11-17 01:35:50.850728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.850884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.850897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:42.824 [2024-11-17 01:35:50.850904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:15:42.824 [2024-11-17 01:35:50.850913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.869205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.869234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:42.824 [2024-11-17 01:35:50.869244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.258 ms 00:15:42.824 [2024-11-17 01:35:50.869252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.887290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.887318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:42.824 [2024-11-17 01:35:50.887327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.011 ms 00:15:42.824 [2024-11-17 01:35:50.887334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.887807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.887819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:42.824 [2024-11-17 01:35:50.887827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:15:42.824 [2024-11-17 01:35:50.887834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.824 [2024-11-17 01:35:50.952675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.824 [2024-11-17 01:35:50.952707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:42.824 [2024-11-17 01:35:50.952715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.819 ms 00:15:42.825 [2024-11-17 01:35:50.952723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.825 [2024-11-17 
01:35:50.971983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.825 [2024-11-17 01:35:50.972011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:42.825 [2024-11-17 01:35:50.972020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.202 ms 00:15:42.825 [2024-11-17 01:35:50.972030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.825 [2024-11-17 01:35:50.990222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.825 [2024-11-17 01:35:50.990342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:42.825 [2024-11-17 01:35:50.990355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.166 ms 00:15:42.825 [2024-11-17 01:35:50.990364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.825 [2024-11-17 01:35:51.009430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.825 [2024-11-17 01:35:51.009540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:42.825 [2024-11-17 01:35:51.009553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.042 ms 00:15:42.825 [2024-11-17 01:35:51.009560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.825 [2024-11-17 01:35:51.009587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.825 [2024-11-17 01:35:51.009599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:42.825 [2024-11-17 01:35:51.009605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:42.825 [2024-11-17 01:35:51.009613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.825 [2024-11-17 01:35:51.009677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.825 [2024-11-17 01:35:51.009686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:42.825 [2024-11-17 01:35:51.009693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:42.825 [2024-11-17 01:35:51.009701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.825 [2024-11-17 01:35:51.010761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4025.184 ms, result 0 00:15:42.825 { 00:15:42.825 "name": "ftl0", 00:15:42.825 "uuid": "0bacc49e-58b7-4576-8b83-fefdfc8718a1" 00:15:42.825 } 00:15:42.825 01:35:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:15:42.825 01:35:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:42.825 01:35:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:15:42.825 01:35:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:43.084 [2024-11-17 01:35:51.358703] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:43.084 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:43.084 Zero copy mechanism will not be used. 00:15:43.084 Running I/O for 4 seconds... 
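
With ftl0 up (FTL startup finished in 4025.184 ms, dominated by the ~3.7 s NV cache scrub above), the script runs three timed passes through bdevperf's RPC interface; the @30 invocation is traced just above, and @31 and @32 follow below. The first pass uses 69632-byte I/Os (68 KiB, i.e. 64 KiB + 4 KiB), which is why bdevperf notes the size exceeds its 65536-byte zero-copy threshold and will not use zero copy:

    # The three measurement passes this section runs (flags as traced):
    perf=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $perf perform_tests -q 1   -w randwrite -t 4 -o 69632   # qd=1, 68 KiB random writes
    $perf perform_tests -q 128 -w randwrite -t 4 -o 4096    # qd=128, 4 KiB random writes
    $perf perform_tests -q 128 -w verify    -t 4 -o 4096    # qd=128, 4 KiB writes with read-back verify
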
00:15:44.951 747.00 IOPS, 49.61 MiB/s [2024-11-17T01:35:54.785Z] 763.50 IOPS, 50.70 MiB/s [2024-11-17T01:35:55.723Z] 764.33 IOPS, 50.76 MiB/s [2024-11-17T01:35:55.723Z] 765.00 IOPS, 50.80 MiB/s 00:15:47.264 Latency(us) 00:15:47.264 [2024-11-17T01:35:55.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.264 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:47.264 ftl0 : 4.00 764.92 50.80 0.00 0.00 1391.88 428.50 2810.49 00:15:47.264 [2024-11-17T01:35:55.723Z] =================================================================================================================== 00:15:47.264 [2024-11-17T01:35:55.723Z] Total : 764.92 50.80 0.00 0.00 1391.88 428.50 2810.49 00:15:47.264 [2024-11-17 01:35:55.367378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:47.264 { 00:15:47.264 "results": [ 00:15:47.264 { 00:15:47.264 "job": "ftl0", 00:15:47.264 "core_mask": "0x1", 00:15:47.264 "workload": "randwrite", 00:15:47.264 "status": "finished", 00:15:47.264 "queue_depth": 1, 00:15:47.264 "io_size": 69632, 00:15:47.264 "runtime": 4.001749, 00:15:47.264 "iops": 764.9155406798377, 00:15:47.264 "mibps": 50.795172623270474, 00:15:47.264 "io_failed": 0, 00:15:47.264 "io_timeout": 0, 00:15:47.264 "avg_latency_us": 1391.8829301635965, 00:15:47.264 "min_latency_us": 428.50461538461536, 00:15:47.264 "max_latency_us": 2810.4861538461537 00:15:47.264 } 00:15:47.264 ], 00:15:47.264 "core_count": 1 00:15:47.264 } 00:15:47.264 01:35:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:47.264 [2024-11-17 01:35:55.462639] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:47.264 Running I/O for 4 seconds... 
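
A quick consistency check on the qd=1 table above: bdevperf's MiB/s column is just IOPS times I/O size, and the JSON payload (iops 764.9155..., mibps 50.7951...) matches the rounded table values:

    # MiB/s = IOPS * io_size / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 764.9155406798377 * 69632 / 1048576 }'   # -> 50.80 MiB/s

At queue depth 1 the device only ever sees one outstanding 68 KiB write, so this pass characterizes single-stream write latency (about 1.39 ms on average) rather than peak throughput.
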
00:15:49.146 6404.00 IOPS, 25.02 MiB/s [2024-11-17T01:35:58.542Z] 5736.50 IOPS, 22.41 MiB/s [2024-11-17T01:35:59.483Z] 5640.00 IOPS, 22.03 MiB/s [2024-11-17T01:35:59.743Z] 5554.50 IOPS, 21.70 MiB/s 00:15:51.284 Latency(us) 00:15:51.284 [2024-11-17T01:35:59.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.284 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:51.284 ftl0 : 4.03 5539.63 21.64 0.00 0.00 23005.56 523.03 42547.99 00:15:51.284 [2024-11-17T01:35:59.743Z] =================================================================================================================== 00:15:51.284 [2024-11-17T01:35:59.743Z] Total : 5539.63 21.64 0.00 0.00 23005.56 0.00 42547.99 00:15:51.284 [2024-11-17 01:35:59.502524] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:51.284 { 00:15:51.284 "results": [ 00:15:51.284 { 00:15:51.284 "job": "ftl0", 00:15:51.284 "core_mask": "0x1", 00:15:51.284 "workload": "randwrite", 00:15:51.284 "status": "finished", 00:15:51.284 "queue_depth": 128, 00:15:51.284 "io_size": 4096, 00:15:51.284 "runtime": 4.030957, 00:15:51.284 "iops": 5539.627438347767, 00:15:51.284 "mibps": 21.639169681045964, 00:15:51.284 "io_failed": 0, 00:15:51.284 "io_timeout": 0, 00:15:51.284 "avg_latency_us": 23005.560808295155, 00:15:51.284 "min_latency_us": 523.0276923076923, 00:15:51.284 "max_latency_us": 42547.987692307695 00:15:51.284 } 00:15:51.284 ], 00:15:51.284 "core_count": 1 00:15:51.284 } 00:15:51.284 01:35:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:15:51.284 [2024-11-17 01:35:59.613061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:51.284 Running I/O for 4 seconds... 
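
The qd=128 pass trades latency for throughput: average completion time rises from ~1.4 ms to ~23 ms while IOPS climb to ~5.5 k. The two figures are consistent with a queue that stays saturated, since Little's law puts in-flight I/Os at roughly IOPS times average latency:

    # Little's law check on the qd=128 run above:
    awk 'BEGIN { printf "in-flight ~= %.1f\n", 5539.63 * 23005.56 / 1e6 }'   # -> 127.4, vs. a configured depth of 128

Landing just under the configured depth suggests the queue was essentially full for the whole 4-second window.
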
00:15:53.175 5005.00 IOPS, 19.55 MiB/s [2024-11-17T01:36:03.021Z] 4728.00 IOPS, 18.47 MiB/s [2024-11-17T01:36:03.964Z] 4690.00 IOPS, 18.32 MiB/s [2024-11-17T01:36:03.964Z] 4683.50 IOPS, 18.29 MiB/s 00:15:55.505 Latency(us) 00:15:55.505 [2024-11-17T01:36:03.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.505 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:55.505 Verification LBA range: start 0x0 length 0x1400000 00:15:55.505 ftl0 : 4.01 4698.33 18.35 0.00 0.00 27172.39 371.79 54041.99 00:15:55.505 [2024-11-17T01:36:03.964Z] =================================================================================================================== 00:15:55.505 [2024-11-17T01:36:03.965Z] Total : 4698.33 18.35 0.00 0.00 27172.39 0.00 54041.99 00:15:55.506 [2024-11-17 01:36:03.647607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:55.506 { 00:15:55.506 "results": [ 00:15:55.506 { 00:15:55.506 "job": "ftl0", 00:15:55.506 "core_mask": "0x1", 00:15:55.506 "workload": "verify", 00:15:55.506 "status": "finished", 00:15:55.506 "verify_range": { 00:15:55.506 "start": 0, 00:15:55.506 "length": 20971520 00:15:55.506 }, 00:15:55.506 "queue_depth": 128, 00:15:55.506 "io_size": 4096, 00:15:55.506 "runtime": 4.012919, 00:15:55.506 "iops": 4698.325582948472, 00:15:55.506 "mibps": 18.35283430839247, 00:15:55.506 "io_failed": 0, 00:15:55.506 "io_timeout": 0, 00:15:55.506 "avg_latency_us": 27172.387131561558, 00:15:55.506 "min_latency_us": 371.79076923076923, 00:15:55.506 "max_latency_us": 54041.99384615385 00:15:55.506 } 00:15:55.506 ], 00:15:55.506 "core_count": 1 00:15:55.506 } 00:15:55.506 01:36:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:15:55.506 [2024-11-17 01:36:03.857552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.506 [2024-11-17 01:36:03.857631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:55.506 [2024-11-17 01:36:03.857651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:55.506 [2024-11-17 01:36:03.857662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.506 [2024-11-17 01:36:03.857686] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:55.506 [2024-11-17 01:36:03.861104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.506 [2024-11-17 01:36:03.861160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:55.506 [2024-11-17 01:36:03.861175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.391 ms 00:15:55.506 [2024-11-17 01:36:03.861184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.506 [2024-11-17 01:36:03.864484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.506 [2024-11-17 01:36:03.864540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:55.506 [2024-11-17 01:36:03.864555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.254 ms 00:15:55.506 [2024-11-17 01:36:03.864565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.769 [2024-11-17 01:36:04.071718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.769 [2024-11-17 01:36:04.071907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:15:55.769 [2024-11-17 01:36:04.071935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 207.119 ms 00:15:55.769 [2024-11-17 01:36:04.071944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.769 [2024-11-17 01:36:04.078092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.769 [2024-11-17 01:36:04.078122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:55.769 [2024-11-17 01:36:04.078136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.112 ms 00:15:55.769 [2024-11-17 01:36:04.078144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.769 [2024-11-17 01:36:04.102574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.769 [2024-11-17 01:36:04.102610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:55.769 [2024-11-17 01:36:04.102623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.365 ms 00:15:55.769 [2024-11-17 01:36:04.102631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.769 [2024-11-17 01:36:04.118775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.769 [2024-11-17 01:36:04.118822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:55.769 [2024-11-17 01:36:04.118838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.108 ms 00:15:55.769 [2024-11-17 01:36:04.118846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.769 [2024-11-17 01:36:04.118984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.769 [2024-11-17 01:36:04.118994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:55.769 [2024-11-17 01:36:04.119007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:15:55.769 [2024-11-17 01:36:04.119014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.769 [2024-11-17 01:36:04.144616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.769 [2024-11-17 01:36:04.144653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:55.769 [2024-11-17 01:36:04.144665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.581 ms 00:15:55.769 [2024-11-17 01:36:04.144673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.769 [2024-11-17 01:36:04.168776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.769 [2024-11-17 01:36:04.168817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:55.769 [2024-11-17 01:36:04.168830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.063 ms 00:15:55.769 [2024-11-17 01:36:04.168837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.769 [2024-11-17 01:36:04.191770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.769 [2024-11-17 01:36:04.191816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:55.769 [2024-11-17 01:36:04.191828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.896 ms 00:15:55.769 [2024-11-17 01:36:04.191834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.769 [2024-11-17 01:36:04.214944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.769 [2024-11-17 01:36:04.214977] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:55.769 [2024-11-17 01:36:04.214992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.041 ms 00:15:55.769 [2024-11-17 01:36:04.214999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.769 [2024-11-17 01:36:04.215033] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:55.769 [2024-11-17 01:36:04.215047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:15:55.769 [2024-11-17 01:36:04.215241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:55.769 [2024-11-17 01:36:04.215335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215913] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:55.770 [2024-11-17 01:36:04.215956] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:55.770 [2024-11-17 01:36:04.215965] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bacc49e-58b7-4576-8b83-fefdfc8718a1 00:15:55.770 [2024-11-17 01:36:04.215972] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:55.770 [2024-11-17 01:36:04.215981] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:55.770 [2024-11-17 01:36:04.215989] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:55.770 [2024-11-17 01:36:04.215999] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:55.770 [2024-11-17 01:36:04.216005] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:55.770 [2024-11-17 01:36:04.216014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:55.770 [2024-11-17 01:36:04.216020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:55.770 [2024-11-17 01:36:04.216030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:55.770 [2024-11-17 01:36:04.216036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:55.770 [2024-11-17 01:36:04.216059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.770 [2024-11-17 01:36:04.216067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:55.770 [2024-11-17 01:36:04.216081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:15:55.770 [2024-11-17 01:36:04.216088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.031 [2024-11-17 01:36:04.228539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.031 [2024-11-17 01:36:04.228569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:56.031 [2024-11-17 01:36:04.228581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.421 ms 00:15:56.031 [2024-11-17 01:36:04.228588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.031 [2024-11-17 01:36:04.228964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.031 [2024-11-17 01:36:04.228974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:56.031 [2024-11-17 01:36:04.228984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:15:56.031 [2024-11-17 01:36:04.228991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.031 [2024-11-17 01:36:04.264407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.031 [2024-11-17 01:36:04.264439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:56.031 [2024-11-17 01:36:04.264453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.031 [2024-11-17 01:36:04.264461] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:15:56.031 [2024-11-17 01:36:04.264517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.031 [2024-11-17 01:36:04.264525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:56.031 [2024-11-17 01:36:04.264534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.031 [2024-11-17 01:36:04.264541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.031 [2024-11-17 01:36:04.264602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.031 [2024-11-17 01:36:04.264614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:56.031 [2024-11-17 01:36:04.264624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.031 [2024-11-17 01:36:04.264631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.031 [2024-11-17 01:36:04.264646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.031 [2024-11-17 01:36:04.264654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:56.031 [2024-11-17 01:36:04.264663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.031 [2024-11-17 01:36:04.264670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.031 [2024-11-17 01:36:04.342437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.031 [2024-11-17 01:36:04.342616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:56.031 [2024-11-17 01:36:04.342638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.031 [2024-11-17 01:36:04.342646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.031 [2024-11-17 01:36:04.406415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.031 [2024-11-17 01:36:04.406455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:56.031 [2024-11-17 01:36:04.406468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.031 [2024-11-17 01:36:04.406475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.032 [2024-11-17 01:36:04.406559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.032 [2024-11-17 01:36:04.406569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:56.032 [2024-11-17 01:36:04.406582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.032 [2024-11-17 01:36:04.406589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.032 [2024-11-17 01:36:04.406630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.032 [2024-11-17 01:36:04.406639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:56.032 [2024-11-17 01:36:04.406649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.032 [2024-11-17 01:36:04.406656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.032 [2024-11-17 01:36:04.406743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.032 [2024-11-17 01:36:04.406752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:56.032 [2024-11-17 01:36:04.406766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:15:56.032 [2024-11-17 01:36:04.406774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.032 [2024-11-17 01:36:04.406827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.032 [2024-11-17 01:36:04.406836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:56.032 [2024-11-17 01:36:04.406846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.032 [2024-11-17 01:36:04.406853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.032 [2024-11-17 01:36:04.406888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.032 [2024-11-17 01:36:04.406897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:56.032 [2024-11-17 01:36:04.406906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.032 [2024-11-17 01:36:04.406915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.032 [2024-11-17 01:36:04.406958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.032 [2024-11-17 01:36:04.406973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:56.032 [2024-11-17 01:36:04.406983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.032 [2024-11-17 01:36:04.406990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.032 [2024-11-17 01:36:04.407110] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 549.524 ms, result 0 00:15:56.032 true 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73177 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 73177 ']' 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 73177 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73177 00:15:56.032 killing process with pid 73177 00:15:56.032 Received shutdown signal, test time was about 4.000000 seconds 00:15:56.032 00:15:56.032 Latency(us) 00:15:56.032 [2024-11-17T01:36:04.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.032 [2024-11-17T01:36:04.491Z] =================================================================================================================== 00:15:56.032 [2024-11-17T01:36:04.491Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73177' 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 73177 00:15:56.032 01:36:04 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 73177 00:15:56.973 Remove shared memory files 00:15:56.973 01:36:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:56.973 01:36:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:15:56.973 01:36:05 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:56.973 01:36:05 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:15:56.973 01:36:05 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:15:56.973 01:36:05 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:15:56.973 01:36:05 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:56.973 01:36:05 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:15:56.973 ************************************ 00:15:56.973 END TEST ftl_bdevperf 00:15:56.973 ************************************ 00:15:56.973 00:15:56.973 real 0m22.051s 00:15:56.973 user 0m24.547s 00:15:56.973 sys 0m0.894s 00:15:56.973 01:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.973 01:36:05 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:56.973 01:36:05 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:15:56.973 01:36:05 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:56.973 01:36:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.973 01:36:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:56.973 ************************************ 00:15:56.973 START TEST ftl_trim 00:15:56.973 ************************************ 00:15:56.973 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:15:56.973 * Looking for test storage... 00:15:56.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:56.973 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:56.973 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:15:56.974 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:57.235 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.235 01:36:05 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:15:57.235 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.235 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.235 --rc genhtml_branch_coverage=1 00:15:57.235 --rc genhtml_function_coverage=1 00:15:57.235 --rc genhtml_legend=1 00:15:57.235 --rc geninfo_all_blocks=1 00:15:57.235 --rc geninfo_unexecuted_blocks=1 00:15:57.235 00:15:57.235 ' 00:15:57.235 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.235 --rc genhtml_branch_coverage=1 00:15:57.235 --rc genhtml_function_coverage=1 00:15:57.235 --rc genhtml_legend=1 00:15:57.235 --rc geninfo_all_blocks=1 00:15:57.235 --rc geninfo_unexecuted_blocks=1 00:15:57.235 00:15:57.235 ' 00:15:57.235 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.235 --rc genhtml_branch_coverage=1 00:15:57.235 --rc genhtml_function_coverage=1 00:15:57.235 --rc genhtml_legend=1 00:15:57.235 --rc geninfo_all_blocks=1 00:15:57.235 --rc geninfo_unexecuted_blocks=1 00:15:57.235 00:15:57.235 ' 00:15:57.235 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.235 --rc genhtml_branch_coverage=1 00:15:57.236 --rc genhtml_function_coverage=1 00:15:57.236 --rc genhtml_legend=1 00:15:57.236 --rc geninfo_all_blocks=1 00:15:57.236 --rc geninfo_unexecuted_blocks=1 00:15:57.236 00:15:57.236 ' 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
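[Editor's note] The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates version 2, which selects the legacy "--rc lcov_*" option spellings exported just below. A minimal standalone sketch of the same split-and-compare technique; the function body here is illustrative (it assumes purely numeric components, which the traced decimal() helper guards in the real script), not the exact SPDK implementation:

    #!/usr/bin/env bash
    # Compare two dotted version strings; return 0 when $1 < $2.
    # Missing components count as 0, so "1.15" < "2" holds.
    version_lt() {
        local -a ver1 ver2
        IFS='.-' read -ra ver1 <<< "$1"
        IFS='.-' read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1  # equal versions are not "less than"
    }

    # Same probe as the trace: lcov prints its version as the last field.
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi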
00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:57.236 01:36:05 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73535 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73535 00:15:57.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.236 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73535 ']' 00:15:57.236 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.236 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.236 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.236 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.236 01:36:05 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:57.236 01:36:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:15:57.236 [2024-11-17 01:36:05.562535] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:15:57.236 [2024-11-17 01:36:05.562690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73535 ] 00:15:57.495 [2024-11-17 01:36:05.727544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:57.495 [2024-11-17 01:36:05.857374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.495 [2024-11-17 01:36:05.858047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.495 [2024-11-17 01:36:05.858187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.439 01:36:06 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.439 01:36:06 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:15:58.439 01:36:06 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:58.439 01:36:06 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:15:58.439 01:36:06 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:58.439 01:36:06 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:15:58.439 01:36:06 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:15:58.439 01:36:06 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:58.439 01:36:06 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:58.439 01:36:06 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:15:58.439 01:36:06 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:58.439 01:36:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:58.439 01:36:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:58.439 01:36:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:15:58.439 01:36:06 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:15:58.439 01:36:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:58.699 01:36:07 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:58.699 { 00:15:58.699 "name": "nvme0n1", 00:15:58.699 "aliases": [ 
00:15:58.699 "920ab50a-221f-4732-8854-de0d1aabfd2c" 00:15:58.699 ], 00:15:58.699 "product_name": "NVMe disk", 00:15:58.699 "block_size": 4096, 00:15:58.699 "num_blocks": 1310720, 00:15:58.699 "uuid": "920ab50a-221f-4732-8854-de0d1aabfd2c", 00:15:58.699 "numa_id": -1, 00:15:58.699 "assigned_rate_limits": { 00:15:58.699 "rw_ios_per_sec": 0, 00:15:58.699 "rw_mbytes_per_sec": 0, 00:15:58.699 "r_mbytes_per_sec": 0, 00:15:58.699 "w_mbytes_per_sec": 0 00:15:58.699 }, 00:15:58.699 "claimed": true, 00:15:58.699 "claim_type": "read_many_write_one", 00:15:58.699 "zoned": false, 00:15:58.699 "supported_io_types": { 00:15:58.699 "read": true, 00:15:58.699 "write": true, 00:15:58.699 "unmap": true, 00:15:58.699 "flush": true, 00:15:58.699 "reset": true, 00:15:58.699 "nvme_admin": true, 00:15:58.699 "nvme_io": true, 00:15:58.699 "nvme_io_md": false, 00:15:58.699 "write_zeroes": true, 00:15:58.699 "zcopy": false, 00:15:58.699 "get_zone_info": false, 00:15:58.699 "zone_management": false, 00:15:58.699 "zone_append": false, 00:15:58.699 "compare": true, 00:15:58.699 "compare_and_write": false, 00:15:58.699 "abort": true, 00:15:58.699 "seek_hole": false, 00:15:58.699 "seek_data": false, 00:15:58.699 "copy": true, 00:15:58.699 "nvme_iov_md": false 00:15:58.699 }, 00:15:58.699 "driver_specific": { 00:15:58.699 "nvme": [ 00:15:58.699 { 00:15:58.699 "pci_address": "0000:00:11.0", 00:15:58.699 "trid": { 00:15:58.699 "trtype": "PCIe", 00:15:58.699 "traddr": "0000:00:11.0" 00:15:58.699 }, 00:15:58.699 "ctrlr_data": { 00:15:58.699 "cntlid": 0, 00:15:58.699 "vendor_id": "0x1b36", 00:15:58.699 "model_number": "QEMU NVMe Ctrl", 00:15:58.699 "serial_number": "12341", 00:15:58.699 "firmware_revision": "8.0.0", 00:15:58.699 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:58.699 "oacs": { 00:15:58.699 "security": 0, 00:15:58.699 "format": 1, 00:15:58.699 "firmware": 0, 00:15:58.699 "ns_manage": 1 00:15:58.699 }, 00:15:58.699 "multi_ctrlr": false, 00:15:58.699 "ana_reporting": false 00:15:58.699 }, 00:15:58.699 "vs": { 00:15:58.699 "nvme_version": "1.4" 00:15:58.699 }, 00:15:58.699 "ns_data": { 00:15:58.699 "id": 1, 00:15:58.699 "can_share": false 00:15:58.699 } 00:15:58.699 } 00:15:58.699 ], 00:15:58.699 "mp_policy": "active_passive" 00:15:58.699 } 00:15:58.699 } 00:15:58.699 ]' 00:15:58.699 01:36:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:58.699 01:36:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:15:58.699 01:36:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:58.699 01:36:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:58.699 01:36:07 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:58.699 01:36:07 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:15:58.700 01:36:07 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:15:58.700 01:36:07 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:58.700 01:36:07 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:15:58.700 01:36:07 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:58.700 01:36:07 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:58.960 01:36:07 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c2722f28-c539-4ebe-96f7-9a5edf7bf65c 00:15:58.960 01:36:07 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:15:58.960 01:36:07 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c2722f28-c539-4ebe-96f7-9a5edf7bf65c 00:15:59.220 01:36:07 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:59.481 01:36:07 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=c144c11f-ddad-4dce-9c9c-2085828e92bf 00:15:59.482 01:36:07 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c144c11f-ddad-4dce-9c9c-2085828e92bf 00:15:59.742 01:36:08 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:15:59.742 01:36:08 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:15:59.742 01:36:08 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:15:59.742 01:36:08 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:59.742 01:36:08 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:15:59.742 01:36:08 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:15:59.742 01:36:08 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:15:59.742 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:15:59.742 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:59.742 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:15:59.742 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:15:59.742 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:16:00.003 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:00.003 { 00:16:00.003 "name": "e84e7bea-015b-4a8c-b2bd-c594e3f8c982", 00:16:00.003 "aliases": [ 00:16:00.003 "lvs/nvme0n1p0" 00:16:00.003 ], 00:16:00.003 "product_name": "Logical Volume", 00:16:00.003 "block_size": 4096, 00:16:00.003 "num_blocks": 26476544, 00:16:00.003 "uuid": "e84e7bea-015b-4a8c-b2bd-c594e3f8c982", 00:16:00.003 "assigned_rate_limits": { 00:16:00.003 "rw_ios_per_sec": 0, 00:16:00.003 "rw_mbytes_per_sec": 0, 00:16:00.003 "r_mbytes_per_sec": 0, 00:16:00.003 "w_mbytes_per_sec": 0 00:16:00.003 }, 00:16:00.003 "claimed": false, 00:16:00.003 "zoned": false, 00:16:00.003 "supported_io_types": { 00:16:00.003 "read": true, 00:16:00.003 "write": true, 00:16:00.003 "unmap": true, 00:16:00.003 "flush": false, 00:16:00.003 "reset": true, 00:16:00.003 "nvme_admin": false, 00:16:00.003 "nvme_io": false, 00:16:00.003 "nvme_io_md": false, 00:16:00.003 "write_zeroes": true, 00:16:00.003 "zcopy": false, 00:16:00.003 "get_zone_info": false, 00:16:00.003 "zone_management": false, 00:16:00.003 "zone_append": false, 00:16:00.003 "compare": false, 00:16:00.003 "compare_and_write": false, 00:16:00.003 "abort": false, 00:16:00.003 "seek_hole": true, 00:16:00.003 "seek_data": true, 00:16:00.003 "copy": false, 00:16:00.003 "nvme_iov_md": false 00:16:00.003 }, 00:16:00.003 "driver_specific": { 00:16:00.003 "lvol": { 00:16:00.003 "lvol_store_uuid": "c144c11f-ddad-4dce-9c9c-2085828e92bf", 00:16:00.003 "base_bdev": "nvme0n1", 00:16:00.003 "thin_provision": true, 00:16:00.003 "num_allocated_clusters": 0, 00:16:00.003 "snapshot": false, 00:16:00.003 "clone": false, 00:16:00.003 "esnap_clone": false 00:16:00.003 } 00:16:00.003 } 00:16:00.003 } 00:16:00.003 ]' 00:16:00.003 01:36:08 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:00.003 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:00.003 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:00.003 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:00.003 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:00.003 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:00.003 01:36:08 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:00.003 01:36:08 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:00.003 01:36:08 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:00.264 01:36:08 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:00.264 01:36:08 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:00.264 01:36:08 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:16:00.264 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:16:00.264 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:00.264 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:00.264 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:00.264 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:16:00.526 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:00.526 { 00:16:00.526 "name": "e84e7bea-015b-4a8c-b2bd-c594e3f8c982", 00:16:00.526 "aliases": [ 00:16:00.526 "lvs/nvme0n1p0" 00:16:00.526 ], 00:16:00.526 "product_name": "Logical Volume", 00:16:00.526 "block_size": 4096, 00:16:00.526 "num_blocks": 26476544, 00:16:00.526 "uuid": "e84e7bea-015b-4a8c-b2bd-c594e3f8c982", 00:16:00.526 "assigned_rate_limits": { 00:16:00.526 "rw_ios_per_sec": 0, 00:16:00.526 "rw_mbytes_per_sec": 0, 00:16:00.526 "r_mbytes_per_sec": 0, 00:16:00.526 "w_mbytes_per_sec": 0 00:16:00.526 }, 00:16:00.526 "claimed": false, 00:16:00.526 "zoned": false, 00:16:00.526 "supported_io_types": { 00:16:00.526 "read": true, 00:16:00.526 "write": true, 00:16:00.526 "unmap": true, 00:16:00.526 "flush": false, 00:16:00.526 "reset": true, 00:16:00.526 "nvme_admin": false, 00:16:00.526 "nvme_io": false, 00:16:00.526 "nvme_io_md": false, 00:16:00.526 "write_zeroes": true, 00:16:00.526 "zcopy": false, 00:16:00.526 "get_zone_info": false, 00:16:00.526 "zone_management": false, 00:16:00.526 "zone_append": false, 00:16:00.526 "compare": false, 00:16:00.526 "compare_and_write": false, 00:16:00.526 "abort": false, 00:16:00.526 "seek_hole": true, 00:16:00.526 "seek_data": true, 00:16:00.526 "copy": false, 00:16:00.526 "nvme_iov_md": false 00:16:00.526 }, 00:16:00.526 "driver_specific": { 00:16:00.526 "lvol": { 00:16:00.526 "lvol_store_uuid": "c144c11f-ddad-4dce-9c9c-2085828e92bf", 00:16:00.526 "base_bdev": "nvme0n1", 00:16:00.526 "thin_provision": true, 00:16:00.526 "num_allocated_clusters": 0, 00:16:00.526 "snapshot": false, 00:16:00.526 "clone": false, 00:16:00.526 "esnap_clone": false 00:16:00.526 } 00:16:00.526 } 00:16:00.526 } 00:16:00.526 ]' 00:16:00.526 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:00.526 01:36:08 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:16:00.526 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:00.526 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:00.526 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:00.526 01:36:08 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:00.526 01:36:08 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:00.526 01:36:08 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:00.787 01:36:09 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:00.787 01:36:09 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:00.787 01:36:09 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:16:00.787 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:16:00.787 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:00.787 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:00.787 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:00.787 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e84e7bea-015b-4a8c-b2bd-c594e3f8c982 00:16:01.048 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:01.048 { 00:16:01.048 "name": "e84e7bea-015b-4a8c-b2bd-c594e3f8c982", 00:16:01.048 "aliases": [ 00:16:01.048 "lvs/nvme0n1p0" 00:16:01.048 ], 00:16:01.048 "product_name": "Logical Volume", 00:16:01.048 "block_size": 4096, 00:16:01.048 "num_blocks": 26476544, 00:16:01.048 "uuid": "e84e7bea-015b-4a8c-b2bd-c594e3f8c982", 00:16:01.048 "assigned_rate_limits": { 00:16:01.048 "rw_ios_per_sec": 0, 00:16:01.048 "rw_mbytes_per_sec": 0, 00:16:01.048 "r_mbytes_per_sec": 0, 00:16:01.048 "w_mbytes_per_sec": 0 00:16:01.048 }, 00:16:01.048 "claimed": false, 00:16:01.048 "zoned": false, 00:16:01.048 "supported_io_types": { 00:16:01.048 "read": true, 00:16:01.048 "write": true, 00:16:01.048 "unmap": true, 00:16:01.048 "flush": false, 00:16:01.048 "reset": true, 00:16:01.048 "nvme_admin": false, 00:16:01.048 "nvme_io": false, 00:16:01.048 "nvme_io_md": false, 00:16:01.048 "write_zeroes": true, 00:16:01.048 "zcopy": false, 00:16:01.048 "get_zone_info": false, 00:16:01.048 "zone_management": false, 00:16:01.048 "zone_append": false, 00:16:01.048 "compare": false, 00:16:01.048 "compare_and_write": false, 00:16:01.048 "abort": false, 00:16:01.048 "seek_hole": true, 00:16:01.048 "seek_data": true, 00:16:01.048 "copy": false, 00:16:01.048 "nvme_iov_md": false 00:16:01.048 }, 00:16:01.048 "driver_specific": { 00:16:01.048 "lvol": { 00:16:01.048 "lvol_store_uuid": "c144c11f-ddad-4dce-9c9c-2085828e92bf", 00:16:01.048 "base_bdev": "nvme0n1", 00:16:01.048 "thin_provision": true, 00:16:01.048 "num_allocated_clusters": 0, 00:16:01.048 "snapshot": false, 00:16:01.048 "clone": false, 00:16:01.048 "esnap_clone": false 00:16:01.048 } 00:16:01.048 } 00:16:01.048 } 00:16:01.048 ]' 00:16:01.048 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:01.048 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:01.048 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:01.048 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:16:01.048 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:01.048 01:36:09 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:01.048 01:36:09 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:01.048 01:36:09 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e84e7bea-015b-4a8c-b2bd-c594e3f8c982 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:01.048 [2024-11-17 01:36:09.504627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.048 [2024-11-17 01:36:09.504664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:01.048 [2024-11-17 01:36:09.504676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:01.048 [2024-11-17 01:36:09.504684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-11-17 01:36:09.506971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-11-17 01:36:09.507001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:01.310 [2024-11-17 01:36:09.507010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.269 ms 00:16:01.310 [2024-11-17 01:36:09.507018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-11-17 01:36:09.507090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:01.310 [2024-11-17 01:36:09.507601] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:01.310 [2024-11-17 01:36:09.507624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-11-17 01:36:09.507632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:01.310 [2024-11-17 01:36:09.507640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:16:01.310 [2024-11-17 01:36:09.507645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-11-17 01:36:09.507774] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb 00:16:01.310 [2024-11-17 01:36:09.508836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-11-17 01:36:09.508864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:01.310 [2024-11-17 01:36:09.508874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:01.310 [2024-11-17 01:36:09.508881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-11-17 01:36:09.514170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-11-17 01:36:09.514196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:01.310 [2024-11-17 01:36:09.514208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.239 ms 00:16:01.310 [2024-11-17 01:36:09.514216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-11-17 01:36:09.514309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-11-17 01:36:09.514320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:01.310 [2024-11-17 01:36:09.514327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.052 ms 00:16:01.310 [2024-11-17 01:36:09.514337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-11-17 01:36:09.514361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-11-17 01:36:09.514370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:01.310 [2024-11-17 01:36:09.514377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:01.310 [2024-11-17 01:36:09.514384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-11-17 01:36:09.514404] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:01.310 [2024-11-17 01:36:09.517339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-11-17 01:36:09.517364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:01.310 [2024-11-17 01:36:09.517376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.936 ms 00:16:01.310 [2024-11-17 01:36:09.517383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-11-17 01:36:09.517419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.310 [2024-11-17 01:36:09.517426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:01.310 [2024-11-17 01:36:09.517434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:01.310 [2024-11-17 01:36:09.517449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.310 [2024-11-17 01:36:09.517467] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:01.310 [2024-11-17 01:36:09.517572] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:01.310 [2024-11-17 01:36:09.517590] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:01.310 [2024-11-17 01:36:09.517599] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:01.310 [2024-11-17 01:36:09.517609] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:01.310 [2024-11-17 01:36:09.517616] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:01.310 [2024-11-17 01:36:09.517624] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:01.310 [2024-11-17 01:36:09.517631] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:01.310 [2024-11-17 01:36:09.517638] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:01.311 [2024-11-17 01:36:09.517645] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:01.311 [2024-11-17 01:36:09.517654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.311 [2024-11-17 01:36:09.517659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:01.311 [2024-11-17 01:36:09.517666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:16:01.311 [2024-11-17 01:36:09.517673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.311 [2024-11-17 01:36:09.517743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.311 
[2024-11-17 01:36:09.517750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:01.311 [2024-11-17 01:36:09.517758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:01.311 [2024-11-17 01:36:09.517763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.311 [2024-11-17 01:36:09.517859] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:01.311 [2024-11-17 01:36:09.517873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:01.311 [2024-11-17 01:36:09.517881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.311 [2024-11-17 01:36:09.517889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.311 [2024-11-17 01:36:09.517896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:01.311 [2024-11-17 01:36:09.517902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:01.311 [2024-11-17 01:36:09.517909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:01.311 [2024-11-17 01:36:09.517915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:01.311 [2024-11-17 01:36:09.517922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:01.311 [2024-11-17 01:36:09.517927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.311 [2024-11-17 01:36:09.517934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:01.311 [2024-11-17 01:36:09.517939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:01.311 [2024-11-17 01:36:09.517946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.311 [2024-11-17 01:36:09.517952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:01.311 [2024-11-17 01:36:09.517959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:01.311 [2024-11-17 01:36:09.517964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.311 [2024-11-17 01:36:09.517971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:01.311 [2024-11-17 01:36:09.517976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:01.311 [2024-11-17 01:36:09.517984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.311 [2024-11-17 01:36:09.517989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:01.311 [2024-11-17 01:36:09.517997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:01.311 [2024-11-17 01:36:09.518003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.311 [2024-11-17 01:36:09.518009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:01.311 [2024-11-17 01:36:09.518015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:01.311 [2024-11-17 01:36:09.518022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.311 [2024-11-17 01:36:09.518027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:01.311 [2024-11-17 01:36:09.518033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:01.311 [2024-11-17 01:36:09.518038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.311 [2024-11-17 01:36:09.518044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:01.311 [2024-11-17 01:36:09.518051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:01.311 [2024-11-17 01:36:09.518057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.311 [2024-11-17 01:36:09.518061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:01.311 [2024-11-17 01:36:09.518070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:01.311 [2024-11-17 01:36:09.518076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.311 [2024-11-17 01:36:09.518082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:01.311 [2024-11-17 01:36:09.518087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:01.311 [2024-11-17 01:36:09.518096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.311 [2024-11-17 01:36:09.518102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:01.311 [2024-11-17 01:36:09.518108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:01.311 [2024-11-17 01:36:09.518113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.311 [2024-11-17 01:36:09.518119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:01.311 [2024-11-17 01:36:09.518124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:01.311 [2024-11-17 01:36:09.518132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.311 [2024-11-17 01:36:09.518136] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:01.311 [2024-11-17 01:36:09.518143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:01.311 [2024-11-17 01:36:09.518149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.311 [2024-11-17 01:36:09.518156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.311 [2024-11-17 01:36:09.518161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:01.311 [2024-11-17 01:36:09.518170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:01.311 [2024-11-17 01:36:09.518175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:01.311 [2024-11-17 01:36:09.518183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:01.311 [2024-11-17 01:36:09.518188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:01.311 [2024-11-17 01:36:09.518193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:01.311 [2024-11-17 01:36:09.518201] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:01.311 [2024-11-17 01:36:09.518209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.311 [2024-11-17 01:36:09.518216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:01.311 [2024-11-17 01:36:09.518222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:01.311 [2024-11-17 01:36:09.518228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:01.311 [2024-11-17 01:36:09.518235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:01.311 [2024-11-17 01:36:09.518240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:01.311 [2024-11-17 01:36:09.518246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:01.311 [2024-11-17 01:36:09.518253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:01.311 [2024-11-17 01:36:09.518260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:01.311 [2024-11-17 01:36:09.518266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:01.312 [2024-11-17 01:36:09.518273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:01.312 [2024-11-17 01:36:09.518279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:01.312 [2024-11-17 01:36:09.518286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:01.312 [2024-11-17 01:36:09.518291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:01.312 [2024-11-17 01:36:09.518299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:01.312 [2024-11-17 01:36:09.518305] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:01.312 [2024-11-17 01:36:09.518317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.312 [2024-11-17 01:36:09.518324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:01.312 [2024-11-17 01:36:09.518331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:01.312 [2024-11-17 01:36:09.518337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:01.312 [2024-11-17 01:36:09.518343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:01.312 [2024-11-17 01:36:09.518349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.312 [2024-11-17 01:36:09.518356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:01.312 [2024-11-17 01:36:09.518362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:16:01.312 [2024-11-17 01:36:09.518368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.312 [2024-11-17 01:36:09.518428] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:01.312 [2024-11-17 01:36:09.518443] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:03.859 [2024-11-17 01:36:12.160574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.859 [2024-11-17 01:36:12.160624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:03.859 [2024-11-17 01:36:12.160638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2642.134 ms 00:16:03.859 [2024-11-17 01:36:12.160648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.859 [2024-11-17 01:36:12.186266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.859 [2024-11-17 01:36:12.186309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:03.859 [2024-11-17 01:36:12.186320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.376 ms 00:16:03.859 [2024-11-17 01:36:12.186330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.859 [2024-11-17 01:36:12.186457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.859 [2024-11-17 01:36:12.186474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:03.859 [2024-11-17 01:36:12.186483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:03.859 [2024-11-17 01:36:12.186494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.859 [2024-11-17 01:36:12.227849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.859 [2024-11-17 01:36:12.227893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:03.859 [2024-11-17 01:36:12.227907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.304 ms 00:16:03.859 [2024-11-17 01:36:12.227919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.859 [2024-11-17 01:36:12.228015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.859 [2024-11-17 01:36:12.228029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:03.859 [2024-11-17 01:36:12.228038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:03.859 [2024-11-17 01:36:12.228047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.859 [2024-11-17 01:36:12.228423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.859 [2024-11-17 01:36:12.228466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:03.859 [2024-11-17 01:36:12.228479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:16:03.859 [2024-11-17 01:36:12.228492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.859 [2024-11-17 01:36:12.228647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.859 [2024-11-17 01:36:12.228667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:03.859 [2024-11-17 01:36:12.228678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:16:03.859 [2024-11-17 01:36:12.228694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.859 [2024-11-17 01:36:12.246690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.859 [2024-11-17 01:36:12.246726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:03.860 [2024-11-17 01:36:12.246736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.938 ms 00:16:03.860 [2024-11-17 01:36:12.246746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.860 [2024-11-17 01:36:12.258673] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:03.860 [2024-11-17 01:36:12.275802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.860 [2024-11-17 01:36:12.275839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:03.860 [2024-11-17 01:36:12.275852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.920 ms 00:16:03.860 [2024-11-17 01:36:12.275861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.121 [2024-11-17 01:36:12.373547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.121 [2024-11-17 01:36:12.373612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:04.121 [2024-11-17 01:36:12.373630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.608 ms 00:16:04.121 [2024-11-17 01:36:12.373640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.121 [2024-11-17 01:36:12.373917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.121 [2024-11-17 01:36:12.373936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:04.121 [2024-11-17 01:36:12.373953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:16:04.121 [2024-11-17 01:36:12.373961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.121 [2024-11-17 01:36:12.399628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.121 [2024-11-17 01:36:12.399680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:04.121 [2024-11-17 01:36:12.399709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.618 ms 00:16:04.121 [2024-11-17 01:36:12.399719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.121 [2024-11-17 01:36:12.425319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.121 [2024-11-17 01:36:12.425369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:04.121 [2024-11-17 01:36:12.425386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.461 ms 00:16:04.121 [2024-11-17 01:36:12.425394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.121 [2024-11-17 01:36:12.426122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.121 [2024-11-17 01:36:12.426154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:04.121 [2024-11-17 01:36:12.426167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:16:04.121 [2024-11-17 01:36:12.426175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.121 [2024-11-17 01:36:12.512859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.121 [2024-11-17 01:36:12.512929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:04.121 [2024-11-17 01:36:12.512953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.632 ms 00:16:04.121 [2024-11-17 01:36:12.512962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
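[Editor's note] For orientation, the FTL startup trace above (including the "l2p maximum resident size is: 59 (of 60) MiB" notice, which follows from --l2p_dram_limit 60) results from the bdev stack assembled by the common.sh/trim.sh helpers traced earlier. A condensed sketch of those same RPCs, assuming the device addresses from this run; capturing the printed identifiers into shell variables is illustrative plumbing, not the exact helper code:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: thin-provisioned 103424 MiB lvol on the 0000:00:11.0 NVMe.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
    lvol_uuid=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")

    # NV cache: a 5171 MiB split of the 0000:00:10.0 NVMe.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev on top of both; this RPC is what produces the startup trace.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

    # get_bdev_size-style check, mirroring the jq probes traced above:
    # size in MiB = block_size * num_blocks / 1 MiB.
    bs=$($rpc bdev_get_bdevs -b ftl0 | jq '.[] .block_size')   # 4096
    nb=$($rpc bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')   # 23592960
    echo $(( bs * nb / 1024 / 1024 ))                          # 92160 MiB of user LBAs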
00:16:04.121 [2024-11-17 01:36:12.541092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.121 [2024-11-17 01:36:12.541146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:04.121 [2024-11-17 01:36:12.541163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.973 ms 00:16:04.121 [2024-11-17 01:36:12.541172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.121 [2024-11-17 01:36:12.567457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.121 [2024-11-17 01:36:12.567504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:04.121 [2024-11-17 01:36:12.567520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.180 ms 00:16:04.121 [2024-11-17 01:36:12.567529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.382 [2024-11-17 01:36:12.594468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.382 [2024-11-17 01:36:12.594518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:04.382 [2024-11-17 01:36:12.594534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.838 ms 00:16:04.382 [2024-11-17 01:36:12.594558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.382 [2024-11-17 01:36:12.594679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.382 [2024-11-17 01:36:12.594699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:04.382 [2024-11-17 01:36:12.594715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:04.382 [2024-11-17 01:36:12.594723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.382 [2024-11-17 01:36:12.594845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.382 [2024-11-17 01:36:12.594859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:04.382 [2024-11-17 01:36:12.594871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:04.382 [2024-11-17 01:36:12.594879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.382 [2024-11-17 01:36:12.596111] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:04.382 [2024-11-17 01:36:12.599766] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3091.042 ms, result 0 00:16:04.382 [2024-11-17 01:36:12.601331] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:04.382 { 00:16:04.382 "name": "ftl0", 00:16:04.382 "uuid": "fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb" 00:16:04.382 } 00:16:04.382 01:36:12 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:04.382 01:36:12 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:04.382 01:36:12 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.382 01:36:12 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:16:04.382 01:36:12 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.382 01:36:12 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.382 01:36:12 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:04.382 01:36:12 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:04.643 [ 00:16:04.643 { 00:16:04.643 "name": "ftl0", 00:16:04.643 "aliases": [ 00:16:04.643 "fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb" 00:16:04.643 ], 00:16:04.643 "product_name": "FTL disk", 00:16:04.643 "block_size": 4096, 00:16:04.643 "num_blocks": 23592960, 00:16:04.643 "uuid": "fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb", 00:16:04.643 "assigned_rate_limits": { 00:16:04.643 "rw_ios_per_sec": 0, 00:16:04.643 "rw_mbytes_per_sec": 0, 00:16:04.643 "r_mbytes_per_sec": 0, 00:16:04.643 "w_mbytes_per_sec": 0 00:16:04.643 }, 00:16:04.643 "claimed": false, 00:16:04.643 "zoned": false, 00:16:04.643 "supported_io_types": { 00:16:04.643 "read": true, 00:16:04.643 "write": true, 00:16:04.643 "unmap": true, 00:16:04.643 "flush": true, 00:16:04.643 "reset": false, 00:16:04.643 "nvme_admin": false, 00:16:04.643 "nvme_io": false, 00:16:04.643 "nvme_io_md": false, 00:16:04.643 "write_zeroes": true, 00:16:04.643 "zcopy": false, 00:16:04.643 "get_zone_info": false, 00:16:04.643 "zone_management": false, 00:16:04.643 "zone_append": false, 00:16:04.643 "compare": false, 00:16:04.643 "compare_and_write": false, 00:16:04.643 "abort": false, 00:16:04.643 "seek_hole": false, 00:16:04.643 "seek_data": false, 00:16:04.643 "copy": false, 00:16:04.643 "nvme_iov_md": false 00:16:04.643 }, 00:16:04.643 "driver_specific": { 00:16:04.643 "ftl": { 00:16:04.643 "base_bdev": "e84e7bea-015b-4a8c-b2bd-c594e3f8c982", 00:16:04.643 "cache": "nvc0n1p0" 00:16:04.643 } 00:16:04.643 } 00:16:04.643 } 00:16:04.643 ] 00:16:04.643 01:36:13 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:16:04.643 01:36:13 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:04.643 01:36:13 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:04.905 01:36:13 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:04.905 01:36:13 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:05.166 01:36:13 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:05.166 { 00:16:05.166 "name": "ftl0", 00:16:05.166 "aliases": [ 00:16:05.166 "fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb" 00:16:05.166 ], 00:16:05.166 "product_name": "FTL disk", 00:16:05.166 "block_size": 4096, 00:16:05.166 "num_blocks": 23592960, 00:16:05.166 "uuid": "fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb", 00:16:05.166 "assigned_rate_limits": { 00:16:05.166 "rw_ios_per_sec": 0, 00:16:05.166 "rw_mbytes_per_sec": 0, 00:16:05.166 "r_mbytes_per_sec": 0, 00:16:05.166 "w_mbytes_per_sec": 0 00:16:05.166 }, 00:16:05.166 "claimed": false, 00:16:05.166 "zoned": false, 00:16:05.166 "supported_io_types": { 00:16:05.166 "read": true, 00:16:05.166 "write": true, 00:16:05.166 "unmap": true, 00:16:05.166 "flush": true, 00:16:05.166 "reset": false, 00:16:05.166 "nvme_admin": false, 00:16:05.166 "nvme_io": false, 00:16:05.166 "nvme_io_md": false, 00:16:05.166 "write_zeroes": true, 00:16:05.166 "zcopy": false, 00:16:05.166 "get_zone_info": false, 00:16:05.166 "zone_management": false, 00:16:05.166 "zone_append": false, 00:16:05.166 "compare": false, 00:16:05.166 "compare_and_write": false, 00:16:05.166 "abort": false, 00:16:05.166 "seek_hole": false, 00:16:05.166 "seek_data": false, 00:16:05.166 "copy": false, 00:16:05.166 "nvme_iov_md": false 00:16:05.166 }, 00:16:05.166 "driver_specific": { 00:16:05.166 "ftl": { 00:16:05.166 "base_bdev": "e84e7bea-015b-4a8c-b2bd-c594e3f8c982", 
00:16:05.166 "cache": "nvc0n1p0" 00:16:05.166 } 00:16:05.166 } 00:16:05.166 } 00:16:05.166 ]' 00:16:05.166 01:36:13 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:05.166 01:36:13 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:05.166 01:36:13 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:05.428 [2024-11-17 01:36:13.724817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.724874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:05.428 [2024-11-17 01:36:13.724894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:05.428 [2024-11-17 01:36:13.724908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.428 [2024-11-17 01:36:13.724953] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:05.428 [2024-11-17 01:36:13.728004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.728050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:05.428 [2024-11-17 01:36:13.728068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:16:05.428 [2024-11-17 01:36:13.728076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.428 [2024-11-17 01:36:13.729019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.729044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:05.428 [2024-11-17 01:36:13.729057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:16:05.428 [2024-11-17 01:36:13.729065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.428 [2024-11-17 01:36:13.732739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.732767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:05.428 [2024-11-17 01:36:13.732781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:16:05.428 [2024-11-17 01:36:13.732802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.428 [2024-11-17 01:36:13.739900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.739943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:05.428 [2024-11-17 01:36:13.739958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.019 ms 00:16:05.428 [2024-11-17 01:36:13.739967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.428 [2024-11-17 01:36:13.768237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.768287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:05.428 [2024-11-17 01:36:13.768307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.127 ms 00:16:05.428 [2024-11-17 01:36:13.768315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.428 [2024-11-17 01:36:13.786697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.786747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:05.428 [2024-11-17 01:36:13.786764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.257 ms 00:16:05.428 [2024-11-17 01:36:13.786777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.428 [2024-11-17 01:36:13.787123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.787142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:05.428 [2024-11-17 01:36:13.787155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:16:05.428 [2024-11-17 01:36:13.787163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.428 [2024-11-17 01:36:13.813473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.813521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:05.428 [2024-11-17 01:36:13.813536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.251 ms 00:16:05.428 [2024-11-17 01:36:13.813544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.428 [2024-11-17 01:36:13.839252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.839297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:05.428 [2024-11-17 01:36:13.839316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.575 ms 00:16:05.428 [2024-11-17 01:36:13.839324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.428 [2024-11-17 01:36:13.864487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.428 [2024-11-17 01:36:13.864533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:05.428 [2024-11-17 01:36:13.864548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.021 ms 00:16:05.428 [2024-11-17 01:36:13.864555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.689 [2024-11-17 01:36:13.889417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.689 [2024-11-17 01:36:13.889463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:05.689 [2024-11-17 01:36:13.889479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.639 ms 00:16:05.689 [2024-11-17 01:36:13.889487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.689 [2024-11-17 01:36:13.889586] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:05.689 [2024-11-17 01:36:13.889604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889677] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 
[2024-11-17 01:36:13.889935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.889998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:05.689 [2024-11-17 01:36:13.890158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:05.689 [2024-11-17 01:36:13.890540] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:05.689 [2024-11-17 01:36:13.890552] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb 00:16:05.689 [2024-11-17 01:36:13.890561] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:05.689 [2024-11-17 01:36:13.890569] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:05.689 [2024-11-17 01:36:13.890577] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:05.689 [2024-11-17 01:36:13.890587] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:05.689 [2024-11-17 01:36:13.890597] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:05.689 [2024-11-17 01:36:13.890614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:16:05.689 [2024-11-17 01:36:13.890622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:05.689 [2024-11-17 01:36:13.890631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:05.689 [2024-11-17 01:36:13.890637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:05.689 [2024-11-17 01:36:13.890646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.689 [2024-11-17 01:36:13.890654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:05.689 [2024-11-17 01:36:13.890672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:16:05.689 [2024-11-17 01:36:13.890679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.689 [2024-11-17 01:36:13.904565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.689 [2024-11-17 01:36:13.904606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:05.689 [2024-11-17 01:36:13.904626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.809 ms 00:16:05.689 [2024-11-17 01:36:13.904634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.689 [2024-11-17 01:36:13.905136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.689 [2024-11-17 01:36:13.905157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:05.689 [2024-11-17 01:36:13.905171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:16:05.689 [2024-11-17 01:36:13.905179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.689 [2024-11-17 01:36:13.955182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.689 [2024-11-17 01:36:13.955237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:05.689 [2024-11-17 01:36:13.955251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.689 [2024-11-17 01:36:13.955260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.689 [2024-11-17 01:36:13.955410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.689 [2024-11-17 01:36:13.955422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:05.689 [2024-11-17 01:36:13.955433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.689 [2024-11-17 01:36:13.955441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.690 [2024-11-17 01:36:13.955540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.690 [2024-11-17 01:36:13.955551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:05.690 [2024-11-17 01:36:13.955567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.690 [2024-11-17 01:36:13.955575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.690 [2024-11-17 01:36:13.955643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.690 [2024-11-17 01:36:13.955653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:05.690 [2024-11-17 01:36:13.955663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.690 [2024-11-17 01:36:13.955671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.690 [2024-11-17 01:36:14.047173] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.690 [2024-11-17 01:36:14.047236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:05.690 [2024-11-17 01:36:14.047250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.690 [2024-11-17 01:36:14.047258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.690 [2024-11-17 01:36:14.118192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.690 [2024-11-17 01:36:14.118254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:05.690 [2024-11-17 01:36:14.118270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.690 [2024-11-17 01:36:14.118280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.690 [2024-11-17 01:36:14.118419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.690 [2024-11-17 01:36:14.118431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:05.690 [2024-11-17 01:36:14.118464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.690 [2024-11-17 01:36:14.118477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.690 [2024-11-17 01:36:14.118571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.690 [2024-11-17 01:36:14.118580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:05.690 [2024-11-17 01:36:14.118591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.690 [2024-11-17 01:36:14.118599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.690 [2024-11-17 01:36:14.118744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.690 [2024-11-17 01:36:14.118757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:05.690 [2024-11-17 01:36:14.118768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.690 [2024-11-17 01:36:14.118777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.690 [2024-11-17 01:36:14.118877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.690 [2024-11-17 01:36:14.118891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:05.690 [2024-11-17 01:36:14.118901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.690 [2024-11-17 01:36:14.118910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.690 [2024-11-17 01:36:14.118980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.690 [2024-11-17 01:36:14.119007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:05.690 [2024-11-17 01:36:14.119022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.690 [2024-11-17 01:36:14.119030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.690 [2024-11-17 01:36:14.119110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.690 [2024-11-17 01:36:14.119123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:05.690 [2024-11-17 01:36:14.119136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.690 [2024-11-17 01:36:14.119145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:05.690 [2024-11-17 01:36:14.119412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 394.591 ms, result 0 00:16:05.690 true 00:16:05.690 01:36:14 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73535 00:16:05.690 01:36:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73535 ']' 00:16:05.690 01:36:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73535 00:16:05.949 01:36:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:16:05.949 01:36:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.949 01:36:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73535 00:16:05.949 01:36:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.949 killing process with pid 73535 00:16:05.949 01:36:14 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.949 01:36:14 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73535' 00:16:05.949 01:36:14 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73535 00:16:05.949 01:36:14 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73535 00:16:12.522 01:36:20 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:13.491 65536+0 records in 00:16:13.491 65536+0 records out 00:16:13.491 268435456 bytes (268 MB, 256 MiB) copied, 1.09304 s, 246 MB/s 00:16:13.491 01:36:21 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:13.491 [2024-11-17 01:36:21.687638] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
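The dd step above generates the random test pattern: 65536 blocks of 4 KiB, i.e. exactly the 268435456 bytes (256 MiB) dd reports, matching the 256 MB total the copy below works through. A minimal standalone reproduction of the size check, assuming a local random_pattern output path rather than the test tree's own file:

# Sanity check of the dd step, for illustration only; not part of trim.sh.
bs=4096
count=65536
bytes=$(( bs * count ))
echo "$bytes bytes"                    # 268435456
echo "$(( bytes / 1024 / 1024 )) MiB"  # 256, matching dd's "(268 MB, 256 MiB)"
# 268435456 B / 1.09304 s ~= 245.6 MB/s, which dd rounds to the logged 246 MB/s.
dd if=/dev/urandom of=random_pattern bs=4K count=65536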
00:16:13.491 [2024-11-17 01:36:21.687766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73724 ] 00:16:13.491 [2024-11-17 01:36:21.837548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.491 [2024-11-17 01:36:21.913386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.752 [2024-11-17 01:36:22.119203] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:13.752 [2024-11-17 01:36:22.119254] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:14.014 [2024-11-17 01:36:22.266871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.014 [2024-11-17 01:36:22.266909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:14.014 [2024-11-17 01:36:22.266919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:14.014 [2024-11-17 01:36:22.266925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.014 [2024-11-17 01:36:22.269023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.014 [2024-11-17 01:36:22.269053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:14.014 [2024-11-17 01:36:22.269061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.086 ms 00:16:14.014 [2024-11-17 01:36:22.269067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.014 [2024-11-17 01:36:22.269125] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:14.014 [2024-11-17 01:36:22.269665] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:14.014 [2024-11-17 01:36:22.269682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.014 [2024-11-17 01:36:22.269689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:14.015 [2024-11-17 01:36:22.269696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:16:14.015 [2024-11-17 01:36:22.269702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.015 [2024-11-17 01:36:22.270684] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:14.015 [2024-11-17 01:36:22.280084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.015 [2024-11-17 01:36:22.280117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:14.015 [2024-11-17 01:36:22.280126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.401 ms 00:16:14.015 [2024-11-17 01:36:22.280132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.015 [2024-11-17 01:36:22.280198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.015 [2024-11-17 01:36:22.280207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:14.015 [2024-11-17 01:36:22.280213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:14.015 [2024-11-17 01:36:22.280218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.015 [2024-11-17 01:36:22.284610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:14.015 [2024-11-17 01:36:22.284636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:14.015 [2024-11-17 01:36:22.284643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.363 ms 00:16:14.015 [2024-11-17 01:36:22.284649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.015 [2024-11-17 01:36:22.284722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.015 [2024-11-17 01:36:22.284730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:14.015 [2024-11-17 01:36:22.284736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:14.015 [2024-11-17 01:36:22.284743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.015 [2024-11-17 01:36:22.284758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.015 [2024-11-17 01:36:22.284766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:14.015 [2024-11-17 01:36:22.284772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:14.015 [2024-11-17 01:36:22.284778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.015 [2024-11-17 01:36:22.284807] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:14.015 [2024-11-17 01:36:22.287365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.015 [2024-11-17 01:36:22.287390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:14.015 [2024-11-17 01:36:22.287397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:16:14.015 [2024-11-17 01:36:22.287402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.015 [2024-11-17 01:36:22.287429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.015 [2024-11-17 01:36:22.287436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:14.015 [2024-11-17 01:36:22.287442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:14.015 [2024-11-17 01:36:22.287448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.015 [2024-11-17 01:36:22.287461] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:14.015 [2024-11-17 01:36:22.287477] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:14.015 [2024-11-17 01:36:22.287503] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:14.015 [2024-11-17 01:36:22.287514] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:14.015 [2024-11-17 01:36:22.287598] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:14.015 [2024-11-17 01:36:22.287606] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:14.015 [2024-11-17 01:36:22.287614] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:14.015 [2024-11-17 01:36:22.287622] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:14.015 [2024-11-17 01:36:22.287630] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:14.015 [2024-11-17 01:36:22.287637] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:14.015 [2024-11-17 01:36:22.287643] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:14.015 [2024-11-17 01:36:22.287649] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:14.015 [2024-11-17 01:36:22.287655] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:14.015 [2024-11-17 01:36:22.287662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.015 [2024-11-17 01:36:22.287667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:14.015 [2024-11-17 01:36:22.287673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:16:14.015 [2024-11-17 01:36:22.287685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.015 [2024-11-17 01:36:22.287753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.015 [2024-11-17 01:36:22.287760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:14.015 [2024-11-17 01:36:22.287768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:14.015 [2024-11-17 01:36:22.287773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.015 [2024-11-17 01:36:22.287859] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:14.015 [2024-11-17 01:36:22.287868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:14.015 [2024-11-17 01:36:22.287874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:14.015 [2024-11-17 01:36:22.287880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.015 [2024-11-17 01:36:22.287887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:14.015 [2024-11-17 01:36:22.287893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:14.015 [2024-11-17 01:36:22.287898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:14.015 [2024-11-17 01:36:22.287903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:14.015 [2024-11-17 01:36:22.287909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:14.015 [2024-11-17 01:36:22.287914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:14.015 [2024-11-17 01:36:22.287921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:14.015 [2024-11-17 01:36:22.287927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:14.015 [2024-11-17 01:36:22.287932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:14.015 [2024-11-17 01:36:22.287942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:14.015 [2024-11-17 01:36:22.287947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:14.015 [2024-11-17 01:36:22.287952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.015 [2024-11-17 01:36:22.287958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:14.015 [2024-11-17 01:36:22.287963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:14.015 [2024-11-17 01:36:22.287968] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.015 [2024-11-17 01:36:22.287973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:14.015 [2024-11-17 01:36:22.287977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:14.015 [2024-11-17 01:36:22.287982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:14.015 [2024-11-17 01:36:22.287987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:14.015 [2024-11-17 01:36:22.287993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:14.015 [2024-11-17 01:36:22.287998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:14.015 [2024-11-17 01:36:22.288003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:14.015 [2024-11-17 01:36:22.288008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:14.015 [2024-11-17 01:36:22.288013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:14.015 [2024-11-17 01:36:22.288018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:14.015 [2024-11-17 01:36:22.288024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:14.015 [2024-11-17 01:36:22.288029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:14.015 [2024-11-17 01:36:22.288034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:14.015 [2024-11-17 01:36:22.288039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:14.015 [2024-11-17 01:36:22.288044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:14.015 [2024-11-17 01:36:22.288049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:14.015 [2024-11-17 01:36:22.288054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:14.015 [2024-11-17 01:36:22.288059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:14.015 [2024-11-17 01:36:22.288064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:14.015 [2024-11-17 01:36:22.288070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:14.015 [2024-11-17 01:36:22.288075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.015 [2024-11-17 01:36:22.288080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:14.015 [2024-11-17 01:36:22.288085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:14.015 [2024-11-17 01:36:22.288091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.015 [2024-11-17 01:36:22.288095] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:14.015 [2024-11-17 01:36:22.288101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:14.015 [2024-11-17 01:36:22.288106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:14.015 [2024-11-17 01:36:22.288114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.015 [2024-11-17 01:36:22.288120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:14.015 [2024-11-17 01:36:22.288125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:14.015 [2024-11-17 01:36:22.288130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:14.015 
[2024-11-17 01:36:22.288136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:14.016 [2024-11-17 01:36:22.288140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:14.016 [2024-11-17 01:36:22.288145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:14.016 [2024-11-17 01:36:22.288152] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:14.016 [2024-11-17 01:36:22.288159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:14.016 [2024-11-17 01:36:22.288165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:14.016 [2024-11-17 01:36:22.288171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:14.016 [2024-11-17 01:36:22.288176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:14.016 [2024-11-17 01:36:22.288182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:14.016 [2024-11-17 01:36:22.288187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:14.016 [2024-11-17 01:36:22.288193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:14.016 [2024-11-17 01:36:22.288198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:14.016 [2024-11-17 01:36:22.288204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:14.016 [2024-11-17 01:36:22.288209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:14.016 [2024-11-17 01:36:22.288215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:14.016 [2024-11-17 01:36:22.288220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:14.016 [2024-11-17 01:36:22.288225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:14.016 [2024-11-17 01:36:22.288230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:14.016 [2024-11-17 01:36:22.288236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:14.016 [2024-11-17 01:36:22.288242] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:14.016 [2024-11-17 01:36:22.288248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:14.016 [2024-11-17 01:36:22.288254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:14.016 [2024-11-17 01:36:22.288259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:14.016 [2024-11-17 01:36:22.288265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:14.016 [2024-11-17 01:36:22.288271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:14.016 [2024-11-17 01:36:22.288276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.288282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:14.016 [2024-11-17 01:36:22.288289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:16:14.016 [2024-11-17 01:36:22.288295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.309141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.309169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:14.016 [2024-11-17 01:36:22.309177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.809 ms 00:16:14.016 [2024-11-17 01:36:22.309183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.309274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.309284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:14.016 [2024-11-17 01:36:22.309291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:14.016 [2024-11-17 01:36:22.309296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.344438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.344470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.016 [2024-11-17 01:36:22.344480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.126 ms 00:16:14.016 [2024-11-17 01:36:22.344488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.344546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.344556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.016 [2024-11-17 01:36:22.344562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:14.016 [2024-11-17 01:36:22.344568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.344874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.344888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.016 [2024-11-17 01:36:22.344895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:16:14.016 [2024-11-17 01:36:22.344902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.345008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.345029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.016 [2024-11-17 01:36:22.345036] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:14.016 [2024-11-17 01:36:22.345041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.355806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.355834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:14.016 [2024-11-17 01:36:22.355842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.749 ms 00:16:14.016 [2024-11-17 01:36:22.355848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.365515] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:14.016 [2024-11-17 01:36:22.365545] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:14.016 [2024-11-17 01:36:22.365555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.365562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:14.016 [2024-11-17 01:36:22.365568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.632 ms 00:16:14.016 [2024-11-17 01:36:22.365573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.384275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.384305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:14.016 [2024-11-17 01:36:22.384320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.654 ms 00:16:14.016 [2024-11-17 01:36:22.384326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.393209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.393235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:14.016 [2024-11-17 01:36:22.393242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.841 ms 00:16:14.016 [2024-11-17 01:36:22.393247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.401770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.401801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:14.016 [2024-11-17 01:36:22.401808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.482 ms 00:16:14.016 [2024-11-17 01:36:22.401814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.402275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.402297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:14.016 [2024-11-17 01:36:22.402304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:16:14.016 [2024-11-17 01:36:22.402310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.446494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.446532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:14.016 [2024-11-17 01:36:22.446542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
44.168 ms 00:16:14.016 [2024-11-17 01:36:22.446549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.454222] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:14.016 [2024-11-17 01:36:22.465901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.465930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:14.016 [2024-11-17 01:36:22.465940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.285 ms 00:16:14.016 [2024-11-17 01:36:22.465946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.466018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.466026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:14.016 [2024-11-17 01:36:22.466033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:14.016 [2024-11-17 01:36:22.466039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.466073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.466080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:14.016 [2024-11-17 01:36:22.466086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:14.016 [2024-11-17 01:36:22.466092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.466113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.016 [2024-11-17 01:36:22.466122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:14.016 [2024-11-17 01:36:22.466128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:14.016 [2024-11-17 01:36:22.466134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.016 [2024-11-17 01:36:22.466157] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:14.016 [2024-11-17 01:36:22.466164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.017 [2024-11-17 01:36:22.466170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:14.017 [2024-11-17 01:36:22.466176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:14.017 [2024-11-17 01:36:22.466181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.277 [2024-11-17 01:36:22.484178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.277 [2024-11-17 01:36:22.484207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:14.277 [2024-11-17 01:36:22.484216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.983 ms 00:16:14.277 [2024-11-17 01:36:22.484222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.277 [2024-11-17 01:36:22.484293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.277 [2024-11-17 01:36:22.484302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:14.277 [2024-11-17 01:36:22.484308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:14.277 [2024-11-17 01:36:22.484314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.277 
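The startup trace above, together with the layout dump it prints, fixes the geometry that spdk_dd is about to write into: 23592960 L2P entries at a 4096-byte block size. A quick arithmetic check in shell (illustrative only; every number is copied from the log):

# Geometry check for ftl0, for illustration only; values taken from the log.
blocks=23592960   # "L2P entries" in the dump, equal to the bdev's num_blocks
bsize=4096        # bdev block_size
bytes=$(( blocks * bsize ))
echo "$bytes bytes"                 # 96636764160
echo "$(( bytes / 1024**2 )) MiB"   # 92160
echo "$(( bytes / 1024**3 )) GiB"   # 90
# The dump lists data_btm at 102400.00 MiB on a 103424.00 MiB base device, so
# the exposed 92160 MiB is 90% of the data region; the remainder appears to be
# held back for FTL internals (e.g. over-provisioning).

The 256 MiB pattern from the dd step therefore covers well under 1% of the device, which is why the copy below completes in roughly twelve seconds at the reported average of 20 MBps.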
[2024-11-17 01:36:22.485283] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:14.277 [2024-11-17 01:36:22.487576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 218.187 ms, result 0 00:16:14.277 [2024-11-17 01:36:22.488192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:14.277 [2024-11-17 01:36:22.502974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:15.218  [2024-11-17T01:36:24.622Z] Copying: 20/256 [MB] (20 MBps) [2024-11-17T01:36:25.567Z] Copying: 37/256 [MB] (16 MBps) [2024-11-17T01:36:26.510Z] Copying: 57/256 [MB] (19 MBps) [2024-11-17T01:36:27.897Z] Copying: 76/256 [MB] (19 MBps) [2024-11-17T01:36:28.840Z] Copying: 105/256 [MB] (28 MBps) [2024-11-17T01:36:29.788Z] Copying: 139/256 [MB] (34 MBps) [2024-11-17T01:36:30.730Z] Copying: 157/256 [MB] (18 MBps) [2024-11-17T01:36:31.673Z] Copying: 172/256 [MB] (14 MBps) [2024-11-17T01:36:32.617Z] Copying: 194/256 [MB] (21 MBps) [2024-11-17T01:36:33.560Z] Copying: 211/256 [MB] (17 MBps) [2024-11-17T01:36:34.947Z] Copying: 237/256 [MB] (25 MBps) [2024-11-17T01:36:34.947Z] Copying: 252/256 [MB] (15 MBps) [2024-11-17T01:36:34.947Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-17 01:36:34.812260] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:26.488 [2024-11-17 01:36:34.822730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.488 [2024-11-17 01:36:34.822779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:26.488 [2024-11-17 01:36:34.822805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:26.488 [2024-11-17 01:36:34.822816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.488 [2024-11-17 01:36:34.822848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:26.488 [2024-11-17 01:36:34.825836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.488 [2024-11-17 01:36:34.825877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:26.488 [2024-11-17 01:36:34.825889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.973 ms 00:16:26.488 [2024-11-17 01:36:34.825898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.488 [2024-11-17 01:36:34.829022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.488 [2024-11-17 01:36:34.829071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:26.488 [2024-11-17 01:36:34.829083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.096 ms 00:16:26.488 [2024-11-17 01:36:34.829091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.488 [2024-11-17 01:36:34.837038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.488 [2024-11-17 01:36:34.837095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:26.488 [2024-11-17 01:36:34.837106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.928 ms 00:16:26.488 [2024-11-17 01:36:34.837114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.488 [2024-11-17 01:36:34.844035] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:16:26.488 [2024-11-17 01:36:34.844074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:26.488 [2024-11-17 01:36:34.844085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.877 ms 00:16:26.488 [2024-11-17 01:36:34.844094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.488 [2024-11-17 01:36:34.869381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.488 [2024-11-17 01:36:34.869428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:26.488 [2024-11-17 01:36:34.869441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.226 ms 00:16:26.488 [2024-11-17 01:36:34.869448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.488 [2024-11-17 01:36:34.885752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.488 [2024-11-17 01:36:34.885821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:26.488 [2024-11-17 01:36:34.885834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.238 ms 00:16:26.488 [2024-11-17 01:36:34.885846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.488 [2024-11-17 01:36:34.885974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.488 [2024-11-17 01:36:34.885984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:26.488 [2024-11-17 01:36:34.885994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:26.488 [2024-11-17 01:36:34.886001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.488 [2024-11-17 01:36:34.911640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.488 [2024-11-17 01:36:34.911692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:26.488 [2024-11-17 01:36:34.911705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.620 ms 00:16:26.488 [2024-11-17 01:36:34.911712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.488 [2024-11-17 01:36:34.937059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.488 [2024-11-17 01:36:34.937103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:26.488 [2024-11-17 01:36:34.937115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.299 ms 00:16:26.488 [2024-11-17 01:36:34.937122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.750 [2024-11-17 01:36:34.961367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.750 [2024-11-17 01:36:34.961414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:26.750 [2024-11-17 01:36:34.961425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.199 ms 00:16:26.750 [2024-11-17 01:36:34.961433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.750 [2024-11-17 01:36:34.985649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.750 [2024-11-17 01:36:34.985697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:26.750 [2024-11-17 01:36:34.985708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.140 ms 00:16:26.750 [2024-11-17 01:36:34.985715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
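Each management step in the trace above is bracketed by a pair of trace_step records carrying the step name and its duration in milliseconds. A quick way to rank the slowest steps from a saved copy of this console output — a minimal sketch, assuming the log is saved as ftl.log with one record per line as originally emitted, and that GNU grep/sort/paste are available:

  # Pull each step's trailing "name:" field and the "duration:" field that
  # follows it, join them pairwise, then sort descending on the numeric
  # duration so the most expensive steps surface first.
  grep -oE '(name: .*|duration: [0-9.]+ ms)$' ftl.log \
    | paste -d' ' - - \
    | sort -t: -k3 -gr \
    | head -n 10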
00:16:26.750 [2024-11-17 01:36:34.985762] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:26.750 [2024-11-17 01:36:34.985779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 
[2024-11-17 01:36:34.985986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.985993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.986001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.986008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.986016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.986025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.986032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.986040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.986048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.986056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.986064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:26.750 [2024-11-17 01:36:34.986071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:16:26.751 [2024-11-17 01:36:34.986175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:26.751 [2024-11-17 01:36:34.986601] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:26.751 [2024-11-17 01:36:34.986609] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb 00:16:26.751 [2024-11-17 01:36:34.986617] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:26.751 [2024-11-17 01:36:34.986625] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:26.751 [2024-11-17 01:36:34.986633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:26.751 [2024-11-17 01:36:34.986642] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:26.751 [2024-11-17 01:36:34.986649] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:26.751 [2024-11-17 01:36:34.986657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:26.751 [2024-11-17 01:36:34.986664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:26.751 [2024-11-17 01:36:34.986671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:26.751 [2024-11-17 01:36:34.986677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:26.751 [2024-11-17 01:36:34.986684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.751 [2024-11-17 01:36:34.986695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:26.751 [2024-11-17 01:36:34.986706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:16:26.751 [2024-11-17 01:36:34.986713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.751 [2024-11-17 01:36:35.000097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.751 [2024-11-17 01:36:35.000142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:26.751 [2024-11-17 01:36:35.000154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.351 ms 00:16:26.751 [2024-11-17 01:36:35.000162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.751 [2024-11-17 01:36:35.000567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.751 [2024-11-17 01:36:35.000591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:26.751 [2024-11-17 01:36:35.000601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:16:26.751 [2024-11-17 01:36:35.000609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.751 [2024-11-17 01:36:35.039149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.751 [2024-11-17 01:36:35.039199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:26.751 [2024-11-17 01:36:35.039211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.751 [2024-11-17 01:36:35.039219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.751 [2024-11-17 01:36:35.039308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.751 [2024-11-17 01:36:35.039318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:26.751 [2024-11-17 01:36:35.039326] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.751 [2024-11-17 01:36:35.039334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.751 [2024-11-17 01:36:35.039390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.752 [2024-11-17 01:36:35.039402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:26.752 [2024-11-17 01:36:35.039410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.752 [2024-11-17 01:36:35.039418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.752 [2024-11-17 01:36:35.039435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.752 [2024-11-17 01:36:35.039447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:26.752 [2024-11-17 01:36:35.039455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.752 [2024-11-17 01:36:35.039462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.752 [2024-11-17 01:36:35.123131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.752 [2024-11-17 01:36:35.123189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:26.752 [2024-11-17 01:36:35.123202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.752 [2024-11-17 01:36:35.123211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.752 [2024-11-17 01:36:35.191395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.752 [2024-11-17 01:36:35.191456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:26.752 [2024-11-17 01:36:35.191468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.752 [2024-11-17 01:36:35.191478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.752 [2024-11-17 01:36:35.191532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.752 [2024-11-17 01:36:35.191541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:26.752 [2024-11-17 01:36:35.191550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.752 [2024-11-17 01:36:35.191559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.752 [2024-11-17 01:36:35.191591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.752 [2024-11-17 01:36:35.191600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:26.752 [2024-11-17 01:36:35.191612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.752 [2024-11-17 01:36:35.191621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.752 [2024-11-17 01:36:35.191734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.752 [2024-11-17 01:36:35.191747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:26.752 [2024-11-17 01:36:35.191756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.752 [2024-11-17 01:36:35.191765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.752 [2024-11-17 01:36:35.191832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.752 [2024-11-17 01:36:35.191843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:16:26.752 [2024-11-17 01:36:35.191852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.752 [2024-11-17 01:36:35.191864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.752 [2024-11-17 01:36:35.191908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.752 [2024-11-17 01:36:35.191917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:26.752 [2024-11-17 01:36:35.191927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.752 [2024-11-17 01:36:35.191935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.752 [2024-11-17 01:36:35.191982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:26.752 [2024-11-17 01:36:35.191992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:26.752 [2024-11-17 01:36:35.192004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:26.752 [2024-11-17 01:36:35.192012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.752 [2024-11-17 01:36:35.192165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.422 ms, result 0 00:16:27.694 00:16:27.694 00:16:27.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.694 01:36:35 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73877 00:16:27.694 01:36:35 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73877 00:16:27.694 01:36:35 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73877 ']' 00:16:27.694 01:36:35 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.694 01:36:35 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.694 01:36:35 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.694 01:36:35 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.694 01:36:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:27.694 01:36:35 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:27.694 [2024-11-17 01:36:36.029691] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
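The waitforlisten step above blocks until the freshly launched spdk_tgt accepts RPCs on /var/tmp/spdk.sock; only then does trim.sh start issuing rpc.py calls against it. A minimal sketch of that launch-and-poll pattern — not the actual autotest_common.sh helper; rpc_get_methods is simply used here as a cheap readiness probe:

  # Start the target with FTL init logging; remember its pid for the later kill.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # rpc.py exits non-zero until the target listens on the UNIX socket, so poll.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done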
00:16:27.694 [2024-11-17 01:36:36.029868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73877 ] 00:16:27.955 [2024-11-17 01:36:36.186876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.955 [2024-11-17 01:36:36.304111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.896 01:36:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.896 01:36:36 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:28.896 01:36:36 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:28.896 [2024-11-17 01:36:37.202552] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:28.896 [2024-11-17 01:36:37.202629] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:29.157 [2024-11-17 01:36:37.381358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.157 [2024-11-17 01:36:37.381418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:29.157 [2024-11-17 01:36:37.381435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:29.157 [2024-11-17 01:36:37.381443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.157 [2024-11-17 01:36:37.384437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.157 [2024-11-17 01:36:37.384488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:29.157 [2024-11-17 01:36:37.384501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:16:29.158 [2024-11-17 01:36:37.384509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.384623] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:29.158 [2024-11-17 01:36:37.385369] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:29.158 [2024-11-17 01:36:37.385404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.158 [2024-11-17 01:36:37.385412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:29.158 [2024-11-17 01:36:37.385424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:16:29.158 [2024-11-17 01:36:37.385432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.387236] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:29.158 [2024-11-17 01:36:37.401292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.158 [2024-11-17 01:36:37.401347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:29.158 [2024-11-17 01:36:37.401361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.064 ms 00:16:29.158 [2024-11-17 01:36:37.401372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.401482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.158 [2024-11-17 01:36:37.401498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:29.158 [2024-11-17 01:36:37.401508] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:29.158 [2024-11-17 01:36:37.401519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.409455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.158 [2024-11-17 01:36:37.409506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:29.158 [2024-11-17 01:36:37.409517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.881 ms 00:16:29.158 [2024-11-17 01:36:37.409527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.409642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.158 [2024-11-17 01:36:37.409655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:29.158 [2024-11-17 01:36:37.409666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:29.158 [2024-11-17 01:36:37.409677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.409710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.158 [2024-11-17 01:36:37.409722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:29.158 [2024-11-17 01:36:37.409731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:29.158 [2024-11-17 01:36:37.409740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.409763] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:29.158 [2024-11-17 01:36:37.413861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.158 [2024-11-17 01:36:37.413906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:29.158 [2024-11-17 01:36:37.413919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.100 ms 00:16:29.158 [2024-11-17 01:36:37.413927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.414004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.158 [2024-11-17 01:36:37.414015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:29.158 [2024-11-17 01:36:37.414026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:29.158 [2024-11-17 01:36:37.414037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.414061] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:29.158 [2024-11-17 01:36:37.414082] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:29.158 [2024-11-17 01:36:37.414127] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:29.158 [2024-11-17 01:36:37.414143] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:29.158 [2024-11-17 01:36:37.414253] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:29.158 [2024-11-17 01:36:37.414325] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:29.158 [2024-11-17 01:36:37.414342] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:29.158 [2024-11-17 01:36:37.414356] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:29.158 [2024-11-17 01:36:37.414368] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:29.158 [2024-11-17 01:36:37.414379] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:29.158 [2024-11-17 01:36:37.414391] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:29.158 [2024-11-17 01:36:37.414400] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:29.158 [2024-11-17 01:36:37.414424] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:29.158 [2024-11-17 01:36:37.414434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.158 [2024-11-17 01:36:37.414444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:29.158 [2024-11-17 01:36:37.414453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:16:29.158 [2024-11-17 01:36:37.414462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.414552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.158 [2024-11-17 01:36:37.414574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:29.158 [2024-11-17 01:36:37.414588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:29.158 [2024-11-17 01:36:37.414599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.158 [2024-11-17 01:36:37.414705] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:29.158 [2024-11-17 01:36:37.414727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:29.158 [2024-11-17 01:36:37.414738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.158 [2024-11-17 01:36:37.414749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.158 [2024-11-17 01:36:37.414758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:29.158 [2024-11-17 01:36:37.414771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:29.158 [2024-11-17 01:36:37.414781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:29.158 [2024-11-17 01:36:37.414812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:29.158 [2024-11-17 01:36:37.414824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:29.158 [2024-11-17 01:36:37.414834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.158 [2024-11-17 01:36:37.414843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:29.158 [2024-11-17 01:36:37.414852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:29.158 [2024-11-17 01:36:37.414860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.158 [2024-11-17 01:36:37.414870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:29.158 [2024-11-17 01:36:37.414881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:29.158 [2024-11-17 01:36:37.414892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.158 
[2024-11-17 01:36:37.414900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:29.158 [2024-11-17 01:36:37.414910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:29.158 [2024-11-17 01:36:37.414919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.158 [2024-11-17 01:36:37.414929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:29.158 [2024-11-17 01:36:37.414943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:29.158 [2024-11-17 01:36:37.414952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.158 [2024-11-17 01:36:37.414958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:29.158 [2024-11-17 01:36:37.414968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:29.158 [2024-11-17 01:36:37.414976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.158 [2024-11-17 01:36:37.414984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:29.158 [2024-11-17 01:36:37.414993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:29.158 [2024-11-17 01:36:37.415002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.158 [2024-11-17 01:36:37.415010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:29.158 [2024-11-17 01:36:37.415020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:29.158 [2024-11-17 01:36:37.415027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.158 [2024-11-17 01:36:37.415036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:29.158 [2024-11-17 01:36:37.415043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:29.158 [2024-11-17 01:36:37.415053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.158 [2024-11-17 01:36:37.415060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:29.158 [2024-11-17 01:36:37.415069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:29.158 [2024-11-17 01:36:37.415075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.158 [2024-11-17 01:36:37.415084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:29.158 [2024-11-17 01:36:37.415091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:29.159 [2024-11-17 01:36:37.415102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.159 [2024-11-17 01:36:37.415109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:29.159 [2024-11-17 01:36:37.415117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:29.159 [2024-11-17 01:36:37.415124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.159 [2024-11-17 01:36:37.415133] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:29.159 [2024-11-17 01:36:37.415141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:29.159 [2024-11-17 01:36:37.415152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.159 [2024-11-17 01:36:37.415161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.159 [2024-11-17 01:36:37.415172] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:29.159 [2024-11-17 01:36:37.415179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:29.159 [2024-11-17 01:36:37.415188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:29.159 [2024-11-17 01:36:37.415196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:29.159 [2024-11-17 01:36:37.415204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:29.159 [2024-11-17 01:36:37.415211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:29.159 [2024-11-17 01:36:37.415222] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:29.159 [2024-11-17 01:36:37.415233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.159 [2024-11-17 01:36:37.415246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:29.159 [2024-11-17 01:36:37.415253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:29.159 [2024-11-17 01:36:37.415262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:29.159 [2024-11-17 01:36:37.415269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:29.159 [2024-11-17 01:36:37.415279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:29.159 [2024-11-17 01:36:37.415286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:29.159 [2024-11-17 01:36:37.415296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:29.159 [2024-11-17 01:36:37.415302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:29.159 [2024-11-17 01:36:37.415311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:29.159 [2024-11-17 01:36:37.415318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:29.159 [2024-11-17 01:36:37.415327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:29.159 [2024-11-17 01:36:37.415335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:29.159 [2024-11-17 01:36:37.415345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:29.159 [2024-11-17 01:36:37.415352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:29.159 [2024-11-17 01:36:37.415361] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:29.159 [2024-11-17 
01:36:37.415369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.159 [2024-11-17 01:36:37.415381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:29.159 [2024-11-17 01:36:37.415389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:29.159 [2024-11-17 01:36:37.415398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:29.159 [2024-11-17 01:36:37.415405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:29.159 [2024-11-17 01:36:37.415415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.415422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:29.159 [2024-11-17 01:36:37.415432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:16:29.159 [2024-11-17 01:36:37.415440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.447083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.447130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:29.159 [2024-11-17 01:36:37.447144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.579 ms 00:16:29.159 [2024-11-17 01:36:37.447154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.447287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.447298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:29.159 [2024-11-17 01:36:37.447309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:29.159 [2024-11-17 01:36:37.447319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.482105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.482149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:29.159 [2024-11-17 01:36:37.482166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.760 ms 00:16:29.159 [2024-11-17 01:36:37.482175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.482258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.482269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:29.159 [2024-11-17 01:36:37.482280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:29.159 [2024-11-17 01:36:37.482288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.482839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.482871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:29.159 [2024-11-17 01:36:37.482887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:16:29.159 [2024-11-17 01:36:37.482896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.483050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.483062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:29.159 [2024-11-17 01:36:37.483076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:16:29.159 [2024-11-17 01:36:37.483086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.500779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.500835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:29.159 [2024-11-17 01:36:37.500849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.666 ms 00:16:29.159 [2024-11-17 01:36:37.500857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.514934] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:29.159 [2024-11-17 01:36:37.514983] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:29.159 [2024-11-17 01:36:37.514999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.515008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:29.159 [2024-11-17 01:36:37.515020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.029 ms 00:16:29.159 [2024-11-17 01:36:37.515028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.540845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.540895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:29.159 [2024-11-17 01:36:37.540911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.720 ms 00:16:29.159 [2024-11-17 01:36:37.540919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.554065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.554125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:29.159 [2024-11-17 01:36:37.554143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.045 ms 00:16:29.159 [2024-11-17 01:36:37.554151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.566765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.566822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:29.159 [2024-11-17 01:36:37.566837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.525 ms 00:16:29.159 [2024-11-17 01:36:37.566846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.159 [2024-11-17 01:36:37.567494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.159 [2024-11-17 01:36:37.567527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:29.159 [2024-11-17 01:36:37.567540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:16:29.159 [2024-11-17 01:36:37.567548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.419 [2024-11-17 
01:36:37.643976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.419 [2024-11-17 01:36:37.644041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:29.419 [2024-11-17 01:36:37.644060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.397 ms 00:16:29.419 [2024-11-17 01:36:37.644070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.419 [2024-11-17 01:36:37.655094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:29.419 [2024-11-17 01:36:37.674080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.419 [2024-11-17 01:36:37.674138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:29.419 [2024-11-17 01:36:37.674154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.909 ms 00:16:29.419 [2024-11-17 01:36:37.674165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.419 [2024-11-17 01:36:37.674250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.419 [2024-11-17 01:36:37.674263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:29.419 [2024-11-17 01:36:37.674273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:29.419 [2024-11-17 01:36:37.674283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.419 [2024-11-17 01:36:37.674339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.419 [2024-11-17 01:36:37.674354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:29.419 [2024-11-17 01:36:37.674364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:29.419 [2024-11-17 01:36:37.674375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.419 [2024-11-17 01:36:37.674403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.419 [2024-11-17 01:36:37.674414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:29.419 [2024-11-17 01:36:37.674422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:29.420 [2024-11-17 01:36:37.674434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.420 [2024-11-17 01:36:37.674468] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:29.420 [2024-11-17 01:36:37.674484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.420 [2024-11-17 01:36:37.674492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:29.420 [2024-11-17 01:36:37.674505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:29.420 [2024-11-17 01:36:37.674514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.420 [2024-11-17 01:36:37.700754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.420 [2024-11-17 01:36:37.700814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:29.420 [2024-11-17 01:36:37.700831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.208 ms 00:16:29.420 [2024-11-17 01:36:37.700840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.420 [2024-11-17 01:36:37.700955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.420 [2024-11-17 01:36:37.700966] 
00:16:29.420 [2024-11-17 01:36:37.700955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:29.420 [2024-11-17 01:36:37.700966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:16:29.420 [2024-11-17 01:36:37.700978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:16:29.420 [2024-11-17 01:36:37.700992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:29.420 [2024-11-17 01:36:37.702246] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:29.420 [2024-11-17 01:36:37.705759] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.538 ms, result 0
00:16:29.420 [2024-11-17 01:36:37.707976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:29.420 Some configs were skipped because the RPC state that can call them passed over.
00:16:29.420 01:36:37 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:16:29.681 [2024-11-17 01:36:37.948906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:29.681 [2024-11-17 01:36:37.948974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:16:29.681 [2024-11-17 01:36:37.948987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms
00:16:29.681 [2024-11-17 01:36:37.948998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:29.681 [2024-11-17 01:36:37.949034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.515 ms, result 0
00:16:29.681 true
00:16:29.681 01:36:37 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:16:30.040 [2024-11-17 01:36:38.160526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.040 [2024-11-17 01:36:38.160583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:16:30.040 [2024-11-17 01:36:38.160598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.737 ms
00:16:30.040 [2024-11-17 01:36:38.160606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.040 [2024-11-17 01:36:38.160645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.861 ms, result 0
00:16:30.040 true
00:16:30.040 01:36:38 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73877
00:16:30.040 01:36:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73877 ']'
00:16:30.040 01:36:38 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73877
00:16:30.040 01:36:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:16:30.040 01:36:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:30.040 01:36:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73877
00:16:30.040 killing process with pid 73877
00:16:30.040 01:36:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:30.040 01:36:38 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:30.040 01:36:38 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73877'
00:16:30.040 01:36:38 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73877
00:16:30.040 01:36:38 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73877
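The two bdev_ftl_unmap calls above trim 1024 blocks at LBA 0 and at LBA 23591936; against the 23592960 L2P entries reported in the layout dump when the device is reopened further down, the second call covers exactly the last 1024 blocks of the device. A minimal sketch of the same RPC against a running target, arguments as in this run:

  # Trim 1024 FTL blocks starting at logical block 0; the target logs
  # a 'Process trim' management step and the RPC prints 'true' on success.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 \
      --lba 0 --num_blocks 1024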
00:16:30.639 [2024-11-17 01:36:38.872083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.872132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:30.639 [2024-11-17 01:36:38.872142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:30.639 [2024-11-17 01:36:38.872149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.872167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:16:30.639 [2024-11-17 01:36:38.874222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.874249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:30.639 [2024-11-17 01:36:38.874260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.040 ms
00:16:30.639 [2024-11-17 01:36:38.874266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.874486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.874494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:30.639 [2024-11-17 01:36:38.874501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms
00:16:30.639 [2024-11-17 01:36:38.874507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.878157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.878182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:16:30.639 [2024-11-17 01:36:38.878192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.633 ms
00:16:30.639 [2024-11-17 01:36:38.878198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.883416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.883439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:16:30.639 [2024-11-17 01:36:38.883448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms
00:16:30.639 [2024-11-17 01:36:38.883454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.891339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.891365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:16:30.639 [2024-11-17 01:36:38.891375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.841 ms
00:16:30.639 [2024-11-17 01:36:38.891385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.898434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.898459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:16:30.639 [2024-11-17 01:36:38.898469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.018 ms
00:16:30.639 [2024-11-17 01:36:38.898477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.898580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.898588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:16:30.639 [2024-11-17 01:36:38.898596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:16:30.639 [2024-11-17 01:36:38.898602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.907212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.907242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:16:30.639 [2024-11-17 01:36:38.907250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.594 ms
00:16:30.639 [2024-11-17 01:36:38.907256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.915373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.915396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:16:30.639 [2024-11-17 01:36:38.915407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.088 ms
00:16:30.639 [2024-11-17 01:36:38.915412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.923217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.923241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:16:30.639 [2024-11-17 01:36:38.923251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.774 ms
00:16:30.639 [2024-11-17 01:36:38.923257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.930891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.639 [2024-11-17 01:36:38.930914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:16:30.639 [2024-11-17 01:36:38.930923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.574 ms
00:16:30.639 [2024-11-17 01:36:38.930928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.639 [2024-11-17 01:36:38.930955] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:16:30.639 [2024-11-17 01:36:38.930965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.930974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.930980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.930987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.930992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:16:30.639 [2024-11-17 01:36:38.931191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:16:30.640 [2024-11-17 01:36:38.931631] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:30.640 [2024-11-17 01:36:38.931642] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb
00:16:30.640 [2024-11-17 01:36:38.931651] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:30.640 [2024-11-17 01:36:38.931659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:30.640 [2024-11-17 01:36:38.931664] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:30.640 [2024-11-17 01:36:38.931671] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:30.640 [2024-11-17 01:36:38.931690] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:30.640 [2024-11-17 01:36:38.931697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:16:30.640 [2024-11-17 01:36:38.931703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:16:30.640 [2024-11-17 01:36:38.931709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:16:30.640 [2024-11-17 01:36:38.931714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:16:30.640 [2024-11-17 01:36:38.931721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.640 [2024-11-17 01:36:38.931727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:16:30.640 [2024-11-17 01:36:38.931735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms
00:16:30.640 [2024-11-17 01:36:38.931740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
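Every band in the dump reads 0 of 261120 valid blocks in state free, which is what a clean shutdown after a trim-only workload should leave behind. As a quick sanity check on the geometry (a sketch, assuming SPDK FTL's fixed 4 KiB block size), each band comes to about 1 GiB, and the hundred bands listed roughly account for the 103424.00 MiB base device reported when the device is reopened below:

  # 261120 blocks/band * 4 KiB/block => MiB per band
  echo $(( 261120 * 4096 / 1024 / 1024 ))         # prints 1020
  # 100 such bands => MiB across bands 1-100
  echo $(( 100 * 261120 * 4096 / 1024 / 1024 ))   # prints 102000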
00:16:30.640 [2024-11-17 01:36:38.941149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.640 [2024-11-17 01:36:38.941171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:16:30.640 [2024-11-17 01:36:38.941182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.389 ms
00:16:30.640 [2024-11-17 01:36:38.941187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.640 [2024-11-17 01:36:38.941474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.640 [2024-11-17 01:36:38.941521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:16:30.640 [2024-11-17 01:36:38.941534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms
00:16:30.640 [2024-11-17 01:36:38.941542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.640 [2024-11-17 01:36:38.976460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.640 [2024-11-17 01:36:38.976484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:30.640 [2024-11-17 01:36:38.976493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.640 [2024-11-17 01:36:38.976500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.640 [2024-11-17 01:36:38.977438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.640 [2024-11-17 01:36:38.977460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:16:30.641 [2024-11-17 01:36:38.977469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:38.977476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:38.977514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.641 [2024-11-17 01:36:38.977521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:30.641 [2024-11-17 01:36:38.977530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:38.977536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:38.977550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.641 [2024-11-17 01:36:38.977556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:30.641 [2024-11-17 01:36:38.977563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:38.977569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:39.037261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.641 [2024-11-17 01:36:39.037289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:30.641 [2024-11-17 01:36:39.037298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:39.037305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:39.086105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.641 [2024-11-17 01:36:39.086135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:30.641 [2024-11-17 01:36:39.086145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:39.086153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:39.086210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.641 [2024-11-17 01:36:39.086217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:30.641 [2024-11-17 01:36:39.086226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:39.086232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:39.086255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.641 [2024-11-17 01:36:39.086261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:30.641 [2024-11-17 01:36:39.086268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:39.086274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:39.086343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.641 [2024-11-17 01:36:39.086352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:30.641 [2024-11-17 01:36:39.086359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:39.086364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:39.086390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.641 [2024-11-17 01:36:39.086396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:30.641 [2024-11-17 01:36:39.086404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:39.086409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:39.086438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.641 [2024-11-17 01:36:39.086446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:30.641 [2024-11-17 01:36:39.086455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:39.086461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:39.086493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.641 [2024-11-17 01:36:39.086503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:30.641 [2024-11-17 01:36:39.086512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.641 [2024-11-17 01:36:39.086517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.641 [2024-11-17 01:36:39.086617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 214.519 ms, result 0
00:16:31.226 01:36:39 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:16:31.226 01:36:39 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
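The spdk_dd invocation above reads the contents of ftl0 back into a flat file, with --json pointing at the saved bdev configuration so the standalone spdk_dd app can attach the device. Assuming --count is in the FTL bdev's 4 KiB blocks, the size arithmetic matches the copy progress ticker that follows (a quick check):

  # 65536 blocks * 4 KiB per block => MiB copied
  echo $(( 65536 * 4096 / 1024 / 1024 ))   # prints 256, matching 'Copying: 256/256 [MB]'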
00:16:31.486 [2024-11-17 01:36:39.654958] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:16:31.486 [2024-11-17 01:36:39.655083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73930 ]
00:16:31.486 [2024-11-17 01:36:39.812872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:31.486 [2024-11-17 01:36:39.886623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:31.746 [2024-11-17 01:36:40.094500] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:31.746 [2024-11-17 01:36:40.094545] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:32.007 [2024-11-17 01:36:40.246389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.007 [2024-11-17 01:36:40.246421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:16:32.007 [2024-11-17 01:36:40.246431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:32.007 [2024-11-17 01:36:40.246438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.007 [2024-11-17 01:36:40.248495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.007 [2024-11-17 01:36:40.248519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:32.007 [2024-11-17 01:36:40.248527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.045 ms
00:16:32.007 [2024-11-17 01:36:40.248532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.007 [2024-11-17 01:36:40.248588] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:32.007 [2024-11-17 01:36:40.249105] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:16:32.007 [2024-11-17 01:36:40.249117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.007 [2024-11-17 01:36:40.249123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:32.007 [2024-11-17 01:36:40.249130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms
00:16:32.007 [2024-11-17 01:36:40.249137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.007 [2024-11-17 01:36:40.250080] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:16:32.007 [2024-11-17 01:36:40.260050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.007 [2024-11-17 01:36:40.260083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:16:32.007 [2024-11-17 01:36:40.260092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.971 ms
00:16:32.007 [2024-11-17 01:36:40.260098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.007 [2024-11-17 01:36:40.260170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.007 [2024-11-17 01:36:40.260178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:16:32.007 [2024-11-17 01:36:40.260185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:16:32.007 [2024-11-17 01:36:40.260190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.007 [2024-11-17 01:36:40.264419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.007 [2024-11-17 01:36:40.264439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:32.007 [2024-11-17 01:36:40.264446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.201 ms
00:16:32.007 [2024-11-17 01:36:40.264453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.007 [2024-11-17 01:36:40.264522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.007 [2024-11-17 01:36:40.264529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:32.007 [2024-11-17 01:36:40.264536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:16:32.007 [2024-11-17 01:36:40.264542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.007 [2024-11-17 01:36:40.264558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.007 [2024-11-17 01:36:40.264567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:16:32.007 [2024-11-17 01:36:40.264573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:16:32.007 [2024-11-17 01:36:40.264578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.007 [2024-11-17 01:36:40.264596] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:16:32.007 [2024-11-17 01:36:40.267223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.007 [2024-11-17 01:36:40.267241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:32.007 [2024-11-17 01:36:40.267248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms
00:16:32.007 [2024-11-17 01:36:40.267254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.007 [2024-11-17 01:36:40.267280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.007 [2024-11-17 01:36:40.267287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:16:32.007 [2024-11-17 01:36:40.267294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:16:32.008 [2024-11-17 01:36:40.267300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.008 [2024-11-17 01:36:40.267313] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:16:32.008 [2024-11-17 01:36:40.267329] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:16:32.008 [2024-11-17 01:36:40.267355] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:16:32.008 [2024-11-17 01:36:40.267367] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:16:32.008 [2024-11-17 01:36:40.267445] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:16:32.008 [2024-11-17 01:36:40.267453] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:16:32.008 [2024-11-17 01:36:40.267461] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:16:32.008 [2024-11-17 01:36:40.267469] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:16:32.008 [2024-11-17 01:36:40.267477] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:16:32.008 [2024-11-17 01:36:40.267484] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:16:32.008 [2024-11-17 01:36:40.267489] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:16:32.008 [2024-11-17 01:36:40.267495] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:16:32.008 [2024-11-17 01:36:40.267501] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:16:32.008 [2024-11-17 01:36:40.267508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.008 [2024-11-17 01:36:40.267513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:16:32.008 [2024-11-17 01:36:40.267519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms
00:16:32.008 [2024-11-17 01:36:40.267524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.008 [2024-11-17 01:36:40.267590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.008 [2024-11-17 01:36:40.267598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:16:32.008 [2024-11-17 01:36:40.267606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms
00:16:32.008 [2024-11-17 01:36:40.267611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.008 [2024-11-17 01:36:40.267699] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:16:32.008 [2024-11-17 01:36:40.267707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:16:32.008 [2024-11-17 01:36:40.267714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:16:32.008 [2024-11-17 01:36:40.267720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:16:32.008 [2024-11-17 01:36:40.267732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:16:32.008 [2024-11-17 01:36:40.267744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:16:32.008 [2024-11-17 01:36:40.267750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:16:32.008 [2024-11-17 01:36:40.267760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:16:32.008 [2024-11-17 01:36:40.267767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:16:32.008 [2024-11-17 01:36:40.267772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:16:32.008 [2024-11-17 01:36:40.267782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:16:32.008 [2024-11-17 01:36:40.267797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:16:32.008 [2024-11-17 01:36:40.267802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:16:32.008 [2024-11-17 01:36:40.267813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:16:32.008 [2024-11-17 01:36:40.267818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:16:32.008 [2024-11-17 01:36:40.267828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:16:32.008 [2024-11-17 01:36:40.267840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:16:32.008 [2024-11-17 01:36:40.267845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:16:32.008 [2024-11-17 01:36:40.267855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:16:32.008 [2024-11-17 01:36:40.267860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:16:32.008 [2024-11-17 01:36:40.267869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:16:32.008 [2024-11-17 01:36:40.267875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:16:32.008 [2024-11-17 01:36:40.267885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:16:32.008 [2024-11-17 01:36:40.267890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:16:32.008 [2024-11-17 01:36:40.267900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:16:32.008 [2024-11-17 01:36:40.267905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:16:32.008 [2024-11-17 01:36:40.267911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:16:32.008 [2024-11-17 01:36:40.267916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:16:32.008 [2024-11-17 01:36:40.267921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:16:32.008 [2024-11-17 01:36:40.267926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:16:32.008 [2024-11-17 01:36:40.267936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:16:32.008 [2024-11-17 01:36:40.267942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267948] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:16:32.008 [2024-11-17 01:36:40.267954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:16:32.008 [2024-11-17 01:36:40.267960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:16:32.008 [2024-11-17 01:36:40.267967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:32.008 [2024-11-17 01:36:40.267973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:16:32.008 [2024-11-17 01:36:40.267978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:16:32.008 [2024-11-17 01:36:40.267984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:16:32.008 [2024-11-17 01:36:40.267989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:16:32.008 [2024-11-17 01:36:40.267994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:16:32.008 [2024-11-17 01:36:40.267998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:16:32.008 [2024-11-17 01:36:40.268004] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:16:32.008 [2024-11-17 01:36:40.268011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:16:32.008 [2024-11-17 01:36:40.268018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:16:32.008 [2024-11-17 01:36:40.268024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:16:32.008 [2024-11-17 01:36:40.268029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:16:32.008 [2024-11-17 01:36:40.268034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:16:32.008 [2024-11-17 01:36:40.268040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:16:32.008 [2024-11-17 01:36:40.268045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:16:32.008 [2024-11-17 01:36:40.268050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:16:32.008 [2024-11-17 01:36:40.268057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:16:32.008 [2024-11-17 01:36:40.268062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:16:32.008 [2024-11-17 01:36:40.268068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:16:32.008 [2024-11-17 01:36:40.268073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:16:32.008 [2024-11-17 01:36:40.268078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:16:32.008 [2024-11-17 01:36:40.268083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:16:32.008 [2024-11-17 01:36:40.268089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:16:32.008 [2024-11-17 01:36:40.268094] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:16:32.008 [2024-11-17 01:36:40.268101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:16:32.008 [2024-11-17 01:36:40.268107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:16:32.008 [2024-11-17 01:36:40.268113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:16:32.008 [2024-11-17 01:36:40.268118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:16:32.008 [2024-11-17 01:36:40.268124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:16:32.008 [2024-11-17 01:36:40.268131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.008 [2024-11-17 01:36:40.268138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:16:32.008 [2024-11-17 01:36:40.268146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms
00:16:32.008 [2024-11-17 01:36:40.268151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.008 [2024-11-17 01:36:40.288669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.008 [2024-11-17 01:36:40.288692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:32.008 [2024-11-17 01:36:40.288700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.480 ms
00:16:32.008 [2024-11-17 01:36:40.288706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.008 [2024-11-17 01:36:40.288820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.008 [2024-11-17 01:36:40.288832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:16:32.008 [2024-11-17 01:36:40.288838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:16:32.008 [2024-11-17 01:36:40.288845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.332139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.332166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:32.009 [2024-11-17 01:36:40.332175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.277 ms
00:16:32.009 [2024-11-17 01:36:40.332184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.332244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.332252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:32.009 [2024-11-17 01:36:40.332259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:32.009 [2024-11-17 01:36:40.332265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.332569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.332589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:32.009 [2024-11-17 01:36:40.332597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms
00:16:32.009 [2024-11-17 01:36:40.332604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
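The superblock region table above is expressed in 4 KiB FTL blocks, so it can be cross-checked against the MiB figures in the layout dump (a sketch of the arithmetic, assuming that block size):

  # l2p region: 0x5a00 blocks * 4 KiB = 90 MiB ('Region l2p ... blocks: 90.00 MiB')
  echo $(( 0x5a00 * 4096 / 1024 / 1024 ))      # prints 90
  # ...which is exactly the 23592960 L2P entries at 4 bytes per address
  echo $(( 23592960 * 4 / 1024 / 1024 ))       # prints 90
  # data region on the base device: 0x1900000 blocks ('Region data_btm', 102400.00 MiB)
  echo $(( 0x1900000 * 4096 / 1024 / 1024 ))   # prints 102400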
00:16:32.009 [2024-11-17 01:36:40.332704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.332713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:16:32.009 [2024-11-17 01:36:40.332719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms
00:16:32.009 [2024-11-17 01:36:40.332725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.343379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.343400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:32.009 [2024-11-17 01:36:40.343408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.638 ms
00:16:32.009 [2024-11-17 01:36:40.343414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.353481] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:16:32.009 [2024-11-17 01:36:40.353505] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:16:32.009 [2024-11-17 01:36:40.353514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.353520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:16:32.009 [2024-11-17 01:36:40.353527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.014 ms
00:16:32.009 [2024-11-17 01:36:40.353532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.371986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.372025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:16:32.009 [2024-11-17 01:36:40.372033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.397 ms
00:16:32.009 [2024-11-17 01:36:40.372039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.380973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.380995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:16:32.009 [2024-11-17 01:36:40.381002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.892 ms
00:16:32.009 [2024-11-17 01:36:40.381007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.389971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.389991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:16:32.009 [2024-11-17 01:36:40.389998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.924 ms
00:16:32.009 [2024-11-17 01:36:40.390003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.390455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.390471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:16:32.009 [2024-11-17 01:36:40.390478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms
00:16:32.009 [2024-11-17 01:36:40.390484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.434094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.434124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:16:32.009 [2024-11-17 01:36:40.434135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.593 ms
00:16:32.009 [2024-11-17 01:36:40.434141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.441916] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:16:32.009 [2024-11-17 01:36:40.453222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.453248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:16:32.009 [2024-11-17 01:36:40.453257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.019 ms
00:16:32.009 [2024-11-17 01:36:40.453268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.453334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.453343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:16:32.009 [2024-11-17 01:36:40.453350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:16:32.009 [2024-11-17 01:36:40.453355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.453391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.453398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:16:32.009 [2024-11-17 01:36:40.453404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:16:32.009 [2024-11-17 01:36:40.453412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.453433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.453440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:16:32.009 [2024-11-17 01:36:40.453447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:32.009 [2024-11-17 01:36:40.453452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.009 [2024-11-17 01:36:40.453474] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:16:32.009 [2024-11-17 01:36:40.453481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.009 [2024-11-17 01:36:40.453488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:16:32.009 [2024-11-17 01:36:40.453493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:16:32.009 [2024-11-17 01:36:40.453503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.270 [2024-11-17 01:36:40.471828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.271 [2024-11-17 01:36:40.471850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:16:32.271 [2024-11-17 01:36:40.471858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.309 ms
00:16:32.271 [2024-11-17 01:36:40.471865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
initialization 00:16:32.271 [2024-11-17 01:36:40.471950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:32.271 [2024-11-17 01:36:40.471956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.271 [2024-11-17 01:36:40.472672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:32.271 [2024-11-17 01:36:40.474892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.063 ms, result 0 00:16:32.271 [2024-11-17 01:36:40.475506] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:32.271 [2024-11-17 01:36:40.490415] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:33.213  [2024-11-17T01:36:42.612Z] Copying: 20/256 [MB] (20 MBps) [2024-11-17T01:36:43.555Z] Copying: 33/256 [MB] (12 MBps) [2024-11-17T01:36:44.499Z] Copying: 52/256 [MB] (19 MBps) [2024-11-17T01:36:45.886Z] Copying: 63/256 [MB] (11 MBps) [2024-11-17T01:36:46.830Z] Copying: 76/256 [MB] (12 MBps) [2024-11-17T01:36:47.775Z] Copying: 88/256 [MB] (12 MBps) [2024-11-17T01:36:48.720Z] Copying: 106/256 [MB] (17 MBps) [2024-11-17T01:36:49.668Z] Copying: 119/256 [MB] (12 MBps) [2024-11-17T01:36:50.609Z] Copying: 132/256 [MB] (13 MBps) [2024-11-17T01:36:51.553Z] Copying: 146/256 [MB] (13 MBps) [2024-11-17T01:36:52.497Z] Copying: 164/256 [MB] (17 MBps) [2024-11-17T01:36:53.881Z] Copying: 177/256 [MB] (13 MBps) [2024-11-17T01:36:54.823Z] Copying: 189/256 [MB] (11 MBps) [2024-11-17T01:36:55.768Z] Copying: 203/256 [MB] (13 MBps) [2024-11-17T01:36:56.712Z] Copying: 216/256 [MB] (12 MBps) [2024-11-17T01:36:57.656Z] Copying: 229/256 [MB] (12 MBps) [2024-11-17T01:36:58.229Z] Copying: 243/256 [MB] (13 MBps) [2024-11-17T01:36:58.229Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-17 01:36:58.178710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:49.770 [2024-11-17 01:36:58.188968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.770 [2024-11-17 01:36:58.189020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:49.770 [2024-11-17 01:36:58.189036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:49.770 [2024-11-17 01:36:58.189053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.770 [2024-11-17 01:36:58.189076] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:49.770 [2024-11-17 01:36:58.192086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.770 [2024-11-17 01:36:58.192132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:49.770 [2024-11-17 01:36:58.192143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.994 ms 00:16:49.770 [2024-11-17 01:36:58.192152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.770 [2024-11-17 01:36:58.192416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.770 [2024-11-17 01:36:58.192434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:49.771 [2024-11-17 01:36:58.192445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:16:49.771 [2024-11-17 01:36:58.192454] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:49.771 [2024-11-17 01:36:58.196172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.771 [2024-11-17 01:36:58.196205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:49.771 [2024-11-17 01:36:58.196214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.702 ms 00:16:49.771 [2024-11-17 01:36:58.196222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.771 [2024-11-17 01:36:58.203094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.771 [2024-11-17 01:36:58.203133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:49.771 [2024-11-17 01:36:58.203144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.854 ms 00:16:49.771 [2024-11-17 01:36:58.203152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.034 [2024-11-17 01:36:58.228943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.034 [2024-11-17 01:36:58.228993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:50.034 [2024-11-17 01:36:58.229005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.726 ms 00:16:50.034 [2024-11-17 01:36:58.229013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.034 [2024-11-17 01:36:58.246105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.034 [2024-11-17 01:36:58.246154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:50.034 [2024-11-17 01:36:58.246173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.042 ms 00:16:50.034 [2024-11-17 01:36:58.246181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.034 [2024-11-17 01:36:58.246313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.034 [2024-11-17 01:36:58.246325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:50.034 [2024-11-17 01:36:58.246334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:50.034 [2024-11-17 01:36:58.246342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.034 [2024-11-17 01:36:58.272354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.034 [2024-11-17 01:36:58.272401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:50.034 [2024-11-17 01:36:58.272413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.983 ms 00:16:50.034 [2024-11-17 01:36:58.272421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.034 [2024-11-17 01:36:58.298022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.034 [2024-11-17 01:36:58.298068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:50.034 [2024-11-17 01:36:58.298079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.540 ms 00:16:50.034 [2024-11-17 01:36:58.298087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.034 [2024-11-17 01:36:58.322900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.034 [2024-11-17 01:36:58.322949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:50.034 [2024-11-17 01:36:58.322961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.764 ms 00:16:50.035 
[2024-11-17 01:36:58.322968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.035 [2024-11-17 01:36:58.347820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.035 [2024-11-17 01:36:58.347865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:50.035 [2024-11-17 01:36:58.347877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.775 ms 00:16:50.035 [2024-11-17 01:36:58.347885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.035 [2024-11-17 01:36:58.347931] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:50.035 [2024-11-17 01:36:58.347946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.347957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.347966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.347974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.347982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.347990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.347997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348107] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 
[2024-11-17 01:36:58.348298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:16:50.035 [2024-11-17 01:36:58.348496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:50.035 [2024-11-17 01:36:58.348598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:50.036 [2024-11-17 01:36:58.348755] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:50.036 [2024-11-17 01:36:58.348764] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb 00:16:50.036 [2024-11-17 01:36:58.348773] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:50.036 [2024-11-17 01:36:58.348780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:50.036 [2024-11-17 01:36:58.348801] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:50.036 [2024-11-17 01:36:58.348810] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:50.036 [2024-11-17 01:36:58.348818] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:50.036 [2024-11-17 01:36:58.348827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:50.036 [2024-11-17 01:36:58.348839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:50.036 [2024-11-17 01:36:58.348845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:50.036 [2024-11-17 01:36:58.348852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:50.036 [2024-11-17 01:36:58.348860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.036 [2024-11-17 01:36:58.348869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:50.036 [2024-11-17 01:36:58.348877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:16:50.036 [2024-11-17 01:36:58.348885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.036 [2024-11-17 01:36:58.362348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.036 [2024-11-17 01:36:58.362390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:50.036 [2024-11-17 01:36:58.362402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.431 ms 00:16:50.036 [2024-11-17 01:36:58.362410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.036 [2024-11-17 01:36:58.362862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.036 [2024-11-17 01:36:58.362883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:50.036 [2024-11-17 01:36:58.362894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:16:50.036 [2024-11-17 01:36:58.362902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.036 [2024-11-17 01:36:58.401754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.036 [2024-11-17 01:36:58.401820] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:50.036 [2024-11-17 01:36:58.401832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.036 [2024-11-17 01:36:58.401841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.036 [2024-11-17 01:36:58.401927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.036 [2024-11-17 01:36:58.401938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:50.036 [2024-11-17 01:36:58.401946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.036 [2024-11-17 01:36:58.401954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.036 [2024-11-17 01:36:58.402012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.036 [2024-11-17 01:36:58.402024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:50.036 [2024-11-17 01:36:58.402033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.036 [2024-11-17 01:36:58.402040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.036 [2024-11-17 01:36:58.402062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.036 [2024-11-17 01:36:58.402071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:50.036 [2024-11-17 01:36:58.402079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.036 [2024-11-17 01:36:58.402087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.036 [2024-11-17 01:36:58.487002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.036 [2024-11-17 01:36:58.487058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:50.036 [2024-11-17 01:36:58.487071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.036 [2024-11-17 01:36:58.487080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.298 [2024-11-17 01:36:58.556165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.298 [2024-11-17 01:36:58.556222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:50.298 [2024-11-17 01:36:58.556233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.298 [2024-11-17 01:36:58.556242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.298 [2024-11-17 01:36:58.556305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.298 [2024-11-17 01:36:58.556316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:50.298 [2024-11-17 01:36:58.556326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.298 [2024-11-17 01:36:58.556335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.298 [2024-11-17 01:36:58.556366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.298 [2024-11-17 01:36:58.556384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:50.298 [2024-11-17 01:36:58.556392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.298 [2024-11-17 01:36:58.556400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.298 [2024-11-17 01:36:58.556497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:16:50.298 [2024-11-17 01:36:58.556510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:50.298 [2024-11-17 01:36:58.556518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.298 [2024-11-17 01:36:58.556526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.298 [2024-11-17 01:36:58.556560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.298 [2024-11-17 01:36:58.556570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:50.298 [2024-11-17 01:36:58.556581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.298 [2024-11-17 01:36:58.556589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.298 [2024-11-17 01:36:58.556633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.298 [2024-11-17 01:36:58.556644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:50.298 [2024-11-17 01:36:58.556653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.298 [2024-11-17 01:36:58.556661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.298 [2024-11-17 01:36:58.556710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.298 [2024-11-17 01:36:58.556725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:50.298 [2024-11-17 01:36:58.556734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.298 [2024-11-17 01:36:58.556742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.298 [2024-11-17 01:36:58.556921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.941 ms, result 0 00:16:50.869 00:16:50.869 00:16:50.869 01:36:59 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:50.869 01:36:59 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:51.441 01:36:59 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:51.702 [2024-11-17 01:36:59.949491] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
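The trim check at trim.sh@86 above compares the first 4194304 bytes of the exported data file against /dev/zero: after the trim, that whole range must read back as zeroes. As a minimal illustrative sketch (not part of the test suite; only the byte count and file path are taken from the command above), the same check in C looks like this:

/* zerochk.c - illustrative equivalent of:
 *   cmp --bytes=4194304 <file> /dev/zero
 * Exits 0 if the first 4 MiB of <file> are all zero bytes. */
#include <stdio.h>

#define CHECK_BYTES (4u * 1024 * 1024)  /* 4194304, as in trim.sh@86 */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 2;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 2;
    }
    unsigned char buf[4096];
    size_t remaining = CHECK_BYTES;
    while (remaining > 0) {
        size_t want = remaining < sizeof(buf) ? remaining : sizeof(buf);
        size_t got = fread(buf, 1, want, f);
        if (got == 0) {                  /* EOF or error before 4 MiB */
            fprintf(stderr, "short read\n");
            fclose(f);
            return 2;
        }
        for (size_t i = 0; i < got; i++) {
            if (buf[i] != 0) {           /* first non-zero byte found */
                printf("byte %zu differs: 0x%02x\n",
                       (size_t)(CHECK_BYTES - remaining + i + 1), buf[i]);
                fclose(f);
                return 1;
            }
        }
        remaining -= got;
    }
    fclose(f);
    return 0;                            /* trimmed range reads back clean */
}

Built with cc -o zerochk zerochk.c, running it against /home/vagrant/spdk_repo/spdk/test/ftl/data would mirror the pass/fail behaviour of the cmp invocation above.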
00:16:51.702 [2024-11-17 01:36:59.949655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74139 ] 00:16:51.702 [2024-11-17 01:37:00.115956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.963 [2024-11-17 01:37:00.235718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.224 [2024-11-17 01:37:00.528477] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:52.224 [2024-11-17 01:37:00.528549] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:52.486 [2024-11-17 01:37:00.691379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.691442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:52.486 [2024-11-17 01:37:00.691457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:52.486 [2024-11-17 01:37:00.691466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.694432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.694484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:52.486 [2024-11-17 01:37:00.694495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:16:52.486 [2024-11-17 01:37:00.694503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.694618] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:52.486 [2024-11-17 01:37:00.695340] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:52.486 [2024-11-17 01:37:00.695620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.695639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:52.486 [2024-11-17 01:37:00.695652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:16:52.486 [2024-11-17 01:37:00.695660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.697603] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:52.486 [2024-11-17 01:37:00.712206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.712259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:52.486 [2024-11-17 01:37:00.712274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.605 ms 00:16:52.486 [2024-11-17 01:37:00.712283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.712400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.712413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:52.486 [2024-11-17 01:37:00.712424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:52.486 [2024-11-17 01:37:00.712432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.720408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:52.486 [2024-11-17 01:37:00.720448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:52.486 [2024-11-17 01:37:00.720460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.932 ms 00:16:52.486 [2024-11-17 01:37:00.720468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.720573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.720583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:52.486 [2024-11-17 01:37:00.720592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:52.486 [2024-11-17 01:37:00.720601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.720628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.720642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:52.486 [2024-11-17 01:37:00.720652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:52.486 [2024-11-17 01:37:00.720660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.720682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:52.486 [2024-11-17 01:37:00.724649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.724690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:52.486 [2024-11-17 01:37:00.724701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.972 ms 00:16:52.486 [2024-11-17 01:37:00.724710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.724808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.724820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:52.486 [2024-11-17 01:37:00.724830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:52.486 [2024-11-17 01:37:00.724839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.724863] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:52.486 [2024-11-17 01:37:00.724890] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:52.486 [2024-11-17 01:37:00.724931] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:52.486 [2024-11-17 01:37:00.724948] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:52.486 [2024-11-17 01:37:00.725057] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:52.486 [2024-11-17 01:37:00.725071] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:52.486 [2024-11-17 01:37:00.725083] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:52.486 [2024-11-17 01:37:00.725094] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:52.486 [2024-11-17 01:37:00.725107] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:52.486 [2024-11-17 01:37:00.725118] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:52.486 [2024-11-17 01:37:00.725127] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:52.486 [2024-11-17 01:37:00.725135] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:52.486 [2024-11-17 01:37:00.725143] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:52.486 [2024-11-17 01:37:00.725152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.725161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:52.486 [2024-11-17 01:37:00.725169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:16:52.486 [2024-11-17 01:37:00.725176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.725265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.486 [2024-11-17 01:37:00.725276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:52.486 [2024-11-17 01:37:00.725287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:52.486 [2024-11-17 01:37:00.725295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.486 [2024-11-17 01:37:00.725395] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:52.486 [2024-11-17 01:37:00.725417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:52.486 [2024-11-17 01:37:00.725428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:52.486 [2024-11-17 01:37:00.725436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.486 [2024-11-17 01:37:00.725447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:52.486 [2024-11-17 01:37:00.725454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:52.486 [2024-11-17 01:37:00.725461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:52.486 [2024-11-17 01:37:00.725469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:52.486 [2024-11-17 01:37:00.725477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:52.486 [2024-11-17 01:37:00.725484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:52.486 [2024-11-17 01:37:00.725494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:52.486 [2024-11-17 01:37:00.725501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:52.486 [2024-11-17 01:37:00.725509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:52.486 [2024-11-17 01:37:00.725526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:52.486 [2024-11-17 01:37:00.725533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:52.486 [2024-11-17 01:37:00.725540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.486 [2024-11-17 01:37:00.725547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:52.486 [2024-11-17 01:37:00.725554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:52.486 [2024-11-17 01:37:00.725561] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.486 [2024-11-17 01:37:00.725568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:52.486 [2024-11-17 01:37:00.725575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:52.486 [2024-11-17 01:37:00.725582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.486 [2024-11-17 01:37:00.725590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:52.486 [2024-11-17 01:37:00.725597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:52.486 [2024-11-17 01:37:00.725603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.486 [2024-11-17 01:37:00.725610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:52.486 [2024-11-17 01:37:00.725617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:52.486 [2024-11-17 01:37:00.725623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.486 [2024-11-17 01:37:00.725630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:52.486 [2024-11-17 01:37:00.725637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:52.486 [2024-11-17 01:37:00.725644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.486 [2024-11-17 01:37:00.725652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:52.486 [2024-11-17 01:37:00.725660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:52.486 [2024-11-17 01:37:00.725666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:52.486 [2024-11-17 01:37:00.725673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:52.486 [2024-11-17 01:37:00.725679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:52.487 [2024-11-17 01:37:00.725685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:52.487 [2024-11-17 01:37:00.725694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:52.487 [2024-11-17 01:37:00.725702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:52.487 [2024-11-17 01:37:00.725709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.487 [2024-11-17 01:37:00.725715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:52.487 [2024-11-17 01:37:00.725723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:52.487 [2024-11-17 01:37:00.725731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.487 [2024-11-17 01:37:00.725737] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:52.487 [2024-11-17 01:37:00.725745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:52.487 [2024-11-17 01:37:00.725753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:52.487 [2024-11-17 01:37:00.725763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.487 [2024-11-17 01:37:00.725770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:52.487 [2024-11-17 01:37:00.725777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:52.487 [2024-11-17 01:37:00.725784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:52.487 
[2024-11-17 01:37:00.725805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:52.487 [2024-11-17 01:37:00.725812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:52.487 [2024-11-17 01:37:00.725819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:52.487 [2024-11-17 01:37:00.725827] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:52.487 [2024-11-17 01:37:00.725838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:52.487 [2024-11-17 01:37:00.725847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:52.487 [2024-11-17 01:37:00.725854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:52.487 [2024-11-17 01:37:00.725861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:52.487 [2024-11-17 01:37:00.725869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:52.487 [2024-11-17 01:37:00.725876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:52.487 [2024-11-17 01:37:00.725883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:52.487 [2024-11-17 01:37:00.725892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:52.487 [2024-11-17 01:37:00.725901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:52.487 [2024-11-17 01:37:00.725911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:52.487 [2024-11-17 01:37:00.725919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:52.487 [2024-11-17 01:37:00.725926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:52.487 [2024-11-17 01:37:00.725933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:52.487 [2024-11-17 01:37:00.725942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:52.487 [2024-11-17 01:37:00.725950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:52.487 [2024-11-17 01:37:00.725958] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:52.487 [2024-11-17 01:37:00.725968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:52.487 [2024-11-17 01:37:00.725978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:52.487 [2024-11-17 01:37:00.725985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:52.487 [2024-11-17 01:37:00.725993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:52.487 [2024-11-17 01:37:00.726000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:52.487 [2024-11-17 01:37:00.726008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.726016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:52.487 [2024-11-17 01:37:00.726026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:16:52.487 [2024-11-17 01:37:00.726036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.758291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.758485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:52.487 [2024-11-17 01:37:00.758559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.199 ms 00:16:52.487 [2024-11-17 01:37:00.758584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.758734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.758950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:52.487 [2024-11-17 01:37:00.758982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:52.487 [2024-11-17 01:37:00.759002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.807749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.807988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:52.487 [2024-11-17 01:37:00.808496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.704 ms 00:16:52.487 [2024-11-17 01:37:00.808557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.808817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.808849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:52.487 [2024-11-17 01:37:00.808862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:52.487 [2024-11-17 01:37:00.808870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.809394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.809429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:52.487 [2024-11-17 01:37:00.809440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:16:52.487 [2024-11-17 01:37:00.809452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.809607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.809618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:52.487 [2024-11-17 01:37:00.809626] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:16:52.487 [2024-11-17 01:37:00.809635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.825882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.826057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:52.487 [2024-11-17 01:37:00.826075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.225 ms 00:16:52.487 [2024-11-17 01:37:00.826084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.840363] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:52.487 [2024-11-17 01:37:00.840410] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:52.487 [2024-11-17 01:37:00.840424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.840433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:52.487 [2024-11-17 01:37:00.840443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.219 ms 00:16:52.487 [2024-11-17 01:37:00.840451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.866554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.866609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:52.487 [2024-11-17 01:37:00.866622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.009 ms 00:16:52.487 [2024-11-17 01:37:00.866631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.879689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.879733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:52.487 [2024-11-17 01:37:00.879745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.968 ms 00:16:52.487 [2024-11-17 01:37:00.879752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.892456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.892500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:52.487 [2024-11-17 01:37:00.892512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.598 ms 00:16:52.487 [2024-11-17 01:37:00.892521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.487 [2024-11-17 01:37:00.893211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.487 [2024-11-17 01:37:00.893240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:52.487 [2024-11-17 01:37:00.893252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:16:52.487 [2024-11-17 01:37:00.893260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.749 [2024-11-17 01:37:00.961492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.749 [2024-11-17 01:37:00.961546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:52.749 [2024-11-17 01:37:00.961560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 68.205 ms 00:16:52.749 [2024-11-17 01:37:00.961569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.749 [2024-11-17 01:37:00.972618] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:52.749 [2024-11-17 01:37:00.991420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.749 [2024-11-17 01:37:00.991607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:52.749 [2024-11-17 01:37:00.991626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.753 ms 00:16:52.749 [2024-11-17 01:37:00.991636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.749 [2024-11-17 01:37:00.991751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.749 [2024-11-17 01:37:00.991764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:52.749 [2024-11-17 01:37:00.991774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:52.749 [2024-11-17 01:37:00.991784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.749 [2024-11-17 01:37:00.991879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.749 [2024-11-17 01:37:00.991889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:52.749 [2024-11-17 01:37:00.991898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:52.749 [2024-11-17 01:37:00.991907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.749 [2024-11-17 01:37:00.991934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.750 [2024-11-17 01:37:00.991948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:52.750 [2024-11-17 01:37:00.991957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:52.750 [2024-11-17 01:37:00.991965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.750 [2024-11-17 01:37:00.992003] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:52.750 [2024-11-17 01:37:00.992015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.750 [2024-11-17 01:37:00.992024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:52.750 [2024-11-17 01:37:00.992034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:52.750 [2024-11-17 01:37:00.992043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.750 [2024-11-17 01:37:01.017801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.750 [2024-11-17 01:37:01.017980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:52.750 [2024-11-17 01:37:01.018001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.736 ms 00:16:52.750 [2024-11-17 01:37:01.018010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.750 [2024-11-17 01:37:01.018140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.750 [2024-11-17 01:37:01.018153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:52.750 [2024-11-17 01:37:01.018164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:52.750 [2024-11-17 01:37:01.018173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
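Every management step in the startup sequence above is reported through the same four trace_step records from mngt/ftl_mngt.c: Action, name, duration, and status, with the duration measured per step (e.g. 'Set FTL dirty state' at 25.736 ms just above). A minimal sketch of that pattern, assuming a hypothetical tracer (the struct and function names here are illustrative, not SPDK's internals): stamp a monotonic clock when the step starts, then emit all four records when it completes.

/* Hypothetical step tracer modelled on the Action/name/duration/status
 * quartets that mngt/ftl_mngt.c:trace_step emits in the log above. */
#include <stdio.h>
#include <time.h>

struct step_trace {
    const char     *name;
    struct timespec start;
};

static void step_begin(struct step_trace *t, const char *name)
{
    t->name = name;
    clock_gettime(CLOCK_MONOTONIC, &t->start);   /* stamp only; log at end */
}

static void step_end(const struct step_trace *t, int status)
{
    struct timespec end;
    clock_gettime(CLOCK_MONOTONIC, &end);
    double ms = (end.tv_sec - t->start.tv_sec) * 1e3 +
                (end.tv_nsec - t->start.tv_nsec) / 1e6;
    /* mirror the four-record quartet seen throughout the log */
    printf("[FTL][ftl0] Action\n");
    printf("[FTL][ftl0] name: %s\n", t->name);
    printf("[FTL][ftl0] duration: %.3f ms\n", ms);
    printf("[FTL][ftl0] status: %d\n", status);
}

int main(void)
{
    struct step_trace t;
    step_begin(&t, "Finalize initialization");
    /* ... the step's actual work would run here ... */
    step_end(&t, 0);
    return 0;
}

The near-identical timestamps on the four records of each quartet in the log are consistent with this shape: nothing is printed until the step finishes, so all four records land together.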
00:16:52.750 [2024-11-17 01:37:01.019302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:52.750 [2024-11-17 01:37:01.022937] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.594 ms, result 0 00:16:52.750 [2024-11-17 01:37:01.024175] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:52.750 [2024-11-17 01:37:01.037881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:53.011  [2024-11-17T01:37:01.470Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-11-17 01:37:01.440103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:53.011 [2024-11-17 01:37:01.449190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.011 [2024-11-17 01:37:01.449399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:53.011 [2024-11-17 01:37:01.449420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:53.011 [2024-11-17 01:37:01.449438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.011 [2024-11-17 01:37:01.449467] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:53.011 [2024-11-17 01:37:01.452466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.011 [2024-11-17 01:37:01.452627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:53.011 [2024-11-17 01:37:01.452646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.984 ms 00:16:53.012 [2024-11-17 01:37:01.452655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.012 [2024-11-17 01:37:01.455933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.012 [2024-11-17 01:37:01.455978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:53.012 [2024-11-17 01:37:01.455989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.247 ms 00:16:53.012 [2024-11-17 01:37:01.455997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.012 [2024-11-17 01:37:01.460275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.012 [2024-11-17 01:37:01.460317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:53.012 [2024-11-17 01:37:01.460328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.262 ms 00:16:53.012 [2024-11-17 01:37:01.460336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.012 [2024-11-17 01:37:01.467287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.012 [2024-11-17 01:37:01.467468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:53.012 [2024-11-17 01:37:01.467487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.917 ms 00:16:53.012 [2024-11-17 01:37:01.467496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.274 [2024-11-17 01:37:01.492507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.274 [2024-11-17 01:37:01.492554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:53.274 [2024-11-17 01:37:01.492567] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.959 ms 00:16:53.274 [2024-11-17 01:37:01.492575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.274 [2024-11-17 01:37:01.509274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.274 [2024-11-17 01:37:01.509322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:53.274 [2024-11-17 01:37:01.509338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.651 ms 00:16:53.274 [2024-11-17 01:37:01.509345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.274 [2024-11-17 01:37:01.509510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.274 [2024-11-17 01:37:01.509522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:53.274 [2024-11-17 01:37:01.509532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:53.274 [2024-11-17 01:37:01.509542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.274 [2024-11-17 01:37:01.535886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.274 [2024-11-17 01:37:01.535928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:53.274 [2024-11-17 01:37:01.535939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.316 ms 00:16:53.274 [2024-11-17 01:37:01.535946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.274 [2024-11-17 01:37:01.561252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.274 [2024-11-17 01:37:01.561447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:53.274 [2024-11-17 01:37:01.561469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.248 ms 00:16:53.274 [2024-11-17 01:37:01.561477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.274 [2024-11-17 01:37:01.586586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.274 [2024-11-17 01:37:01.586641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:53.274 [2024-11-17 01:37:01.586655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.799 ms 00:16:53.274 [2024-11-17 01:37:01.586663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.274 [2024-11-17 01:37:01.611483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.274 [2024-11-17 01:37:01.611528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:53.274 [2024-11-17 01:37:01.611540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.725 ms 00:16:53.275 [2024-11-17 01:37:01.611548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.275 [2024-11-17 01:37:01.611623] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:53.275 [2024-11-17 01:37:01.611641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:16:53.275 [2024-11-17 01:37:01.611693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.611998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612321] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:53.275 [2024-11-17 01:37:01.612374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:53.276 [2024-11-17 01:37:01.612510] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:53.276 [2024-11-17 01:37:01.612520] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb 00:16:53.276 [2024-11-17 01:37:01.612528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:53.276 [2024-11-17 01:37:01.612535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:16:53.276 [2024-11-17 01:37:01.612542] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:53.276 [2024-11-17 01:37:01.612550] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:53.276 [2024-11-17 01:37:01.612559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:53.276 [2024-11-17 01:37:01.612568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:53.276 [2024-11-17 01:37:01.612575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:53.276 [2024-11-17 01:37:01.612581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:53.276 [2024-11-17 01:37:01.612588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:53.276 [2024-11-17 01:37:01.612595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.276 [2024-11-17 01:37:01.612607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:53.276 [2024-11-17 01:37:01.612617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:16:53.276 [2024-11-17 01:37:01.612624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.276 [2024-11-17 01:37:01.625283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.276 [2024-11-17 01:37:01.625323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:53.276 [2024-11-17 01:37:01.625335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.639 ms 00:16:53.276 [2024-11-17 01:37:01.625344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.276 [2024-11-17 01:37:01.625738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.276 [2024-11-17 01:37:01.625751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:53.276 [2024-11-17 01:37:01.625761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:16:53.276 [2024-11-17 01:37:01.625769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.276 [2024-11-17 01:37:01.664359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.276 [2024-11-17 01:37:01.664405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:53.276 [2024-11-17 01:37:01.664417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.276 [2024-11-17 01:37:01.664426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.276 [2024-11-17 01:37:01.664505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.276 [2024-11-17 01:37:01.664514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:53.276 [2024-11-17 01:37:01.664523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.276 [2024-11-17 01:37:01.664532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.276 [2024-11-17 01:37:01.664586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.276 [2024-11-17 01:37:01.664597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:53.276 [2024-11-17 01:37:01.664606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.276 [2024-11-17 01:37:01.664613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.276 [2024-11-17 01:37:01.664631] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.276 [2024-11-17 01:37:01.664645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:53.276 [2024-11-17 01:37:01.664653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.276 [2024-11-17 01:37:01.664661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.537 [2024-11-17 01:37:01.748270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.537 [2024-11-17 01:37:01.748479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:53.537 [2024-11-17 01:37:01.748542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.537 [2024-11-17 01:37:01.748566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.537 [2024-11-17 01:37:01.816717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.537 [2024-11-17 01:37:01.816948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:53.538 [2024-11-17 01:37:01.816969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.538 [2024-11-17 01:37:01.816980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.538 [2024-11-17 01:37:01.817042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.538 [2024-11-17 01:37:01.817052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:53.538 [2024-11-17 01:37:01.817062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.538 [2024-11-17 01:37:01.817070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.538 [2024-11-17 01:37:01.817102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.538 [2024-11-17 01:37:01.817111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:53.538 [2024-11-17 01:37:01.817126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.538 [2024-11-17 01:37:01.817134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.538 [2024-11-17 01:37:01.817239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.538 [2024-11-17 01:37:01.817252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:53.538 [2024-11-17 01:37:01.817262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.538 [2024-11-17 01:37:01.817270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.538 [2024-11-17 01:37:01.817304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.538 [2024-11-17 01:37:01.817314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:53.538 [2024-11-17 01:37:01.817322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.538 [2024-11-17 01:37:01.817333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.538 [2024-11-17 01:37:01.817376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.538 [2024-11-17 01:37:01.817387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:53.538 [2024-11-17 01:37:01.817396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.538 [2024-11-17 01:37:01.817404] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:53.538 [2024-11-17 01:37:01.817453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.538 [2024-11-17 01:37:01.817465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:53.538 [2024-11-17 01:37:01.817477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.538 [2024-11-17 01:37:01.817487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.538 [2024-11-17 01:37:01.817644] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.437 ms, result 0 00:16:54.107 00:16:54.108 00:16:54.369 01:37:02 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74170 00:16:54.369 01:37:02 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:54.369 01:37:02 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74170 00:16:54.369 01:37:02 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74170 ']' 00:16:54.369 01:37:02 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.369 01:37:02 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.369 01:37:02 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.369 01:37:02 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.369 01:37:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:54.369 [2024-11-17 01:37:02.669887] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
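
Here trim.sh starts a dedicated spdk_tgt (pid 74170) with the ftl_init log flag and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern is below; the paths mirror the log above, but the polling loop is an illustrative stand-in for the real waitforlisten helper in autotest_common.sh, not its actual code.

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    # Poll the default RPC socket until the target responds
    # (this readiness check is what waitforlisten waits for).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Once the socket answers, the script replays the saved configuration with rpc.py load_config and later drives the trim path with the bdev_ftl_unmap calls seen further down.
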
00:16:54.369 [2024-11-17 01:37:02.670199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74170 ] 00:16:54.630 [2024-11-17 01:37:02.830428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.630 [2024-11-17 01:37:02.931480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.201 01:37:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.201 01:37:03 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:55.201 01:37:03 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:55.462 [2024-11-17 01:37:03.825290] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:55.462 [2024-11-17 01:37:03.825362] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:55.725 [2024-11-17 01:37:04.004958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.725 [2024-11-17 01:37:04.005013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:55.725 [2024-11-17 01:37:04.005030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:55.725 [2024-11-17 01:37:04.005039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.725 [2024-11-17 01:37:04.007966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.725 [2024-11-17 01:37:04.008010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.725 [2024-11-17 01:37:04.008023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:16:55.725 [2024-11-17 01:37:04.008031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.725 [2024-11-17 01:37:04.008142] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:55.725 [2024-11-17 01:37:04.008958] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:55.725 [2024-11-17 01:37:04.008997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.725 [2024-11-17 01:37:04.009006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.725 [2024-11-17 01:37:04.009018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:16:55.725 [2024-11-17 01:37:04.009026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.725 [2024-11-17 01:37:04.011000] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:55.725 [2024-11-17 01:37:04.025144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.725 [2024-11-17 01:37:04.025197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:55.725 [2024-11-17 01:37:04.025211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.151 ms 00:16:55.725 [2024-11-17 01:37:04.025222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.725 [2024-11-17 01:37:04.025334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.725 [2024-11-17 01:37:04.025349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:55.725 [2024-11-17 01:37:04.025359] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:55.725 [2024-11-17 01:37:04.025370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.725 [2024-11-17 01:37:04.033230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.725 [2024-11-17 01:37:04.033280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.725 [2024-11-17 01:37:04.033290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.807 ms 00:16:55.726 [2024-11-17 01:37:04.033301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-11-17 01:37:04.033413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-11-17 01:37:04.033426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.726 [2024-11-17 01:37:04.033435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:55.726 [2024-11-17 01:37:04.033445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-11-17 01:37:04.033480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-11-17 01:37:04.033492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:55.726 [2024-11-17 01:37:04.033500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:55.726 [2024-11-17 01:37:04.033510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-11-17 01:37:04.033533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:55.726 [2024-11-17 01:37:04.037562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-11-17 01:37:04.037740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.726 [2024-11-17 01:37:04.037765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.032 ms 00:16:55.726 [2024-11-17 01:37:04.037774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-11-17 01:37:04.037865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-11-17 01:37:04.037876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:55.726 [2024-11-17 01:37:04.037886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:55.726 [2024-11-17 01:37:04.037899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-11-17 01:37:04.037923] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:55.726 [2024-11-17 01:37:04.037946] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:55.726 [2024-11-17 01:37:04.037991] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:55.726 [2024-11-17 01:37:04.038007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:55.726 [2024-11-17 01:37:04.038116] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:55.726 [2024-11-17 01:37:04.038129] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:55.726 [2024-11-17 01:37:04.038144] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:55.726 [2024-11-17 01:37:04.038157] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:55.726 [2024-11-17 01:37:04.038169] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:55.726 [2024-11-17 01:37:04.038179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:55.726 [2024-11-17 01:37:04.038190] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:55.726 [2024-11-17 01:37:04.038198] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:55.726 [2024-11-17 01:37:04.038210] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:55.726 [2024-11-17 01:37:04.038218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-11-17 01:37:04.038230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:55.726 [2024-11-17 01:37:04.038239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:16:55.726 [2024-11-17 01:37:04.038249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-11-17 01:37:04.038338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.726 [2024-11-17 01:37:04.038350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:55.726 [2024-11-17 01:37:04.038358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:55.726 [2024-11-17 01:37:04.038368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.726 [2024-11-17 01:37:04.038474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:55.726 [2024-11-17 01:37:04.038489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:55.726 [2024-11-17 01:37:04.038497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.726 [2024-11-17 01:37:04.038508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:55.726 [2024-11-17 01:37:04.038527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:55.726 [2024-11-17 01:37:04.038548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:55.726 [2024-11-17 01:37:04.038557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.726 [2024-11-17 01:37:04.038577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:55.726 [2024-11-17 01:37:04.038586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:55.726 [2024-11-17 01:37:04.038593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.726 [2024-11-17 01:37:04.038602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:55.726 [2024-11-17 01:37:04.038609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:55.726 [2024-11-17 01:37:04.038619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.726 
[2024-11-17 01:37:04.038628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:55.726 [2024-11-17 01:37:04.038653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:55.726 [2024-11-17 01:37:04.038660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:55.726 [2024-11-17 01:37:04.038685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.726 [2024-11-17 01:37:04.038700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:55.726 [2024-11-17 01:37:04.038711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.726 [2024-11-17 01:37:04.038727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:55.726 [2024-11-17 01:37:04.038733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.726 [2024-11-17 01:37:04.038749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:55.726 [2024-11-17 01:37:04.038757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.726 [2024-11-17 01:37:04.038774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:55.726 [2024-11-17 01:37:04.038780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.726 [2024-11-17 01:37:04.038814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:55.726 [2024-11-17 01:37:04.038822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:55.726 [2024-11-17 01:37:04.038830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.726 [2024-11-17 01:37:04.038839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:55.726 [2024-11-17 01:37:04.038846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:55.726 [2024-11-17 01:37:04.038856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:55.726 [2024-11-17 01:37:04.038872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:55.726 [2024-11-17 01:37:04.038879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038889] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:55.726 [2024-11-17 01:37:04.038897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:55.726 [2024-11-17 01:37:04.038909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.726 [2024-11-17 01:37:04.038916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.726 [2024-11-17 01:37:04.038926] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:55.726 [2024-11-17 01:37:04.038934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:55.727 [2024-11-17 01:37:04.038944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:55.727 [2024-11-17 01:37:04.038952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:55.727 [2024-11-17 01:37:04.038960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:55.727 [2024-11-17 01:37:04.038967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:55.727 [2024-11-17 01:37:04.038977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:55.727 [2024-11-17 01:37:04.038988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.727 [2024-11-17 01:37:04.039002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:55.727 [2024-11-17 01:37:04.039010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:55.727 [2024-11-17 01:37:04.039019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:55.727 [2024-11-17 01:37:04.039027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:55.727 [2024-11-17 01:37:04.039036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:55.727 [2024-11-17 01:37:04.039044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:55.727 [2024-11-17 01:37:04.039053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:55.727 [2024-11-17 01:37:04.039060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:55.727 [2024-11-17 01:37:04.039070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:55.727 [2024-11-17 01:37:04.039077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:55.727 [2024-11-17 01:37:04.039086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:55.727 [2024-11-17 01:37:04.039093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:55.727 [2024-11-17 01:37:04.039104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:55.727 [2024-11-17 01:37:04.039112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:55.727 [2024-11-17 01:37:04.039121] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:55.727 [2024-11-17 
01:37:04.039130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.727 [2024-11-17 01:37:04.039142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:55.727 [2024-11-17 01:37:04.039150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:55.727 [2024-11-17 01:37:04.039159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:55.727 [2024-11-17 01:37:04.039167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:55.727 [2024-11-17 01:37:04.039176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.039184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:55.727 [2024-11-17 01:37:04.039193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:16:55.727 [2024-11-17 01:37:04.039200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-11-17 01:37:04.070903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.071076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:55.727 [2024-11-17 01:37:04.071155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.640 ms 00:16:55.727 [2024-11-17 01:37:04.071181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-11-17 01:37:04.071332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.071418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:55.727 [2024-11-17 01:37:04.071446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:55.727 [2024-11-17 01:37:04.071466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-11-17 01:37:04.106572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.106745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:55.727 [2024-11-17 01:37:04.106852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.784 ms 00:16:55.727 [2024-11-17 01:37:04.106879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-11-17 01:37:04.106979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.107007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:55.727 [2024-11-17 01:37:04.107087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:55.727 [2024-11-17 01:37:04.107110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-11-17 01:37:04.107631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.107835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:55.727 [2024-11-17 01:37:04.107907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:16:55.727 [2024-11-17 01:37:04.107931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:55.727 [2024-11-17 01:37:04.108094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.108162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:55.727 [2024-11-17 01:37:04.108191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:16:55.727 [2024-11-17 01:37:04.108212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-11-17 01:37:04.125822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.125977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:55.727 [2024-11-17 01:37:04.126036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.513 ms 00:16:55.727 [2024-11-17 01:37:04.126059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-11-17 01:37:04.140035] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:55.727 [2024-11-17 01:37:04.140208] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:55.727 [2024-11-17 01:37:04.140277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.140299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:55.727 [2024-11-17 01:37:04.140322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.087 ms 00:16:55.727 [2024-11-17 01:37:04.140342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-11-17 01:37:04.166709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.166896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:55.727 [2024-11-17 01:37:04.166966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.272 ms 00:16:55.727 [2024-11-17 01:37:04.166994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.727 [2024-11-17 01:37:04.179914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.727 [2024-11-17 01:37:04.180087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:55.727 [2024-11-17 01:37:04.180150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.815 ms 00:16:55.727 [2024-11-17 01:37:04.180173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.989 [2024-11-17 01:37:04.192900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.990 [2024-11-17 01:37:04.193054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:55.990 [2024-11-17 01:37:04.193115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.527 ms 00:16:55.990 [2024-11-17 01:37:04.193137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.990 [2024-11-17 01:37:04.193883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.990 [2024-11-17 01:37:04.194011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:55.990 [2024-11-17 01:37:04.194081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:16:55.990 [2024-11-17 01:37:04.194103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.990 [2024-11-17 
01:37:04.273907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.990 [2024-11-17 01:37:04.274135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:55.990 [2024-11-17 01:37:04.274209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.732 ms 00:16:55.990 [2024-11-17 01:37:04.274235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.990 [2024-11-17 01:37:04.285302] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:55.990 [2024-11-17 01:37:04.304073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.990 [2024-11-17 01:37:04.304266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:55.990 [2024-11-17 01:37:04.304289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.737 ms 00:16:55.990 [2024-11-17 01:37:04.304301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.990 [2024-11-17 01:37:04.304392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.990 [2024-11-17 01:37:04.304406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:55.990 [2024-11-17 01:37:04.304415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:55.990 [2024-11-17 01:37:04.304427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.990 [2024-11-17 01:37:04.304495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.990 [2024-11-17 01:37:04.304508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:55.990 [2024-11-17 01:37:04.304516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:55.990 [2024-11-17 01:37:04.304526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.990 [2024-11-17 01:37:04.304554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.990 [2024-11-17 01:37:04.304565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:55.990 [2024-11-17 01:37:04.304574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:55.990 [2024-11-17 01:37:04.304587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.990 [2024-11-17 01:37:04.304621] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:55.990 [2024-11-17 01:37:04.304637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.990 [2024-11-17 01:37:04.304645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:55.990 [2024-11-17 01:37:04.304659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:55.990 [2024-11-17 01:37:04.304667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.990 [2024-11-17 01:37:04.331106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.990 [2024-11-17 01:37:04.331284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:55.990 [2024-11-17 01:37:04.331312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.406 ms 00:16:55.990 [2024-11-17 01:37:04.331321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.990 [2024-11-17 01:37:04.331430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.990 [2024-11-17 01:37:04.331441] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:55.990 [2024-11-17 01:37:04.331453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:55.990 [2024-11-17 01:37:04.331466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.990 [2024-11-17 01:37:04.332828] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:55.990 [2024-11-17 01:37:04.336264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.416 ms, result 0 00:16:55.990 [2024-11-17 01:37:04.338374] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:55.990 Some configs were skipped because the RPC state that can call them passed over. 00:16:55.990 01:37:04 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:56.251 [2024-11-17 01:37:04.587226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.251 [2024-11-17 01:37:04.587398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:56.251 [2024-11-17 01:37:04.587462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:16:56.251 [2024-11-17 01:37:04.587489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.251 [2024-11-17 01:37:04.587543] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.531 ms, result 0 00:16:56.251 true 00:16:56.251 01:37:04 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:56.513 [2024-11-17 01:37:04.806765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.513 [2024-11-17 01:37:04.806839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:56.513 [2024-11-17 01:37:04.806854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:16:56.513 [2024-11-17 01:37:04.806862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.513 [2024-11-17 01:37:04.806900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.621 ms, result 0 00:16:56.513 true 00:16:56.513 01:37:04 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74170 00:16:56.513 01:37:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74170 ']' 00:16:56.513 01:37:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74170 00:16:56.513 01:37:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:16:56.513 01:37:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.513 01:37:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74170 00:16:56.513 killing process with pid 74170 00:16:56.513 01:37:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.513 01:37:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.513 01:37:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74170' 00:16:56.513 01:37:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74170 00:16:56.513 01:37:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74170 00:16:57.458 [2024-11-17 01:37:05.689330] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.689434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:57.458 [2024-11-17 01:37:05.689452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:57.458 [2024-11-17 01:37:05.689463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.689490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:57.458 [2024-11-17 01:37:05.692963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.693014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:57.458 [2024-11-17 01:37:05.693032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.447 ms 00:16:57.458 [2024-11-17 01:37:05.693040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.693386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.693399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:57.458 [2024-11-17 01:37:05.693412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:16:57.458 [2024-11-17 01:37:05.693421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.698618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.698665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:57.458 [2024-11-17 01:37:05.698683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.170 ms 00:16:57.458 [2024-11-17 01:37:05.698692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.705671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.705719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:57.458 [2024-11-17 01:37:05.705735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.923 ms 00:16:57.458 [2024-11-17 01:37:05.705745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.717232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.717281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:57.458 [2024-11-17 01:37:05.717300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.378 ms 00:16:57.458 [2024-11-17 01:37:05.717317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.727184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.727234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:57.458 [2024-11-17 01:37:05.727251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.790 ms 00:16:57.458 [2024-11-17 01:37:05.727260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.727432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.727445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:57.458 [2024-11-17 01:37:05.727457] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:16:57.458 [2024-11-17 01:37:05.727465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.739172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.739387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:57.458 [2024-11-17 01:37:05.739414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.680 ms 00:16:57.458 [2024-11-17 01:37:05.739422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.750679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.750906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:57.458 [2024-11-17 01:37:05.750937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.120 ms 00:16:57.458 [2024-11-17 01:37:05.750944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.761686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.761734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:57.458 [2024-11-17 01:37:05.761750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.671 ms 00:16:57.458 [2024-11-17 01:37:05.761758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.458 [2024-11-17 01:37:05.772295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.458 [2024-11-17 01:37:05.772341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:57.458 [2024-11-17 01:37:05.772355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.416 ms 00:16:57.458 [2024-11-17 01:37:05.772362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.459 [2024-11-17 01:37:05.772429] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:57.459 [2024-11-17 01:37:05.772447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 
01:37:05.772544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:16:57.459 [2024-11-17 01:37:05.772771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.772973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:57.459 [2024-11-17 01:37:05.773315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:57.460 [2024-11-17 01:37:05.773436] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:57.460 [2024-11-17 01:37:05.773453] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb 00:16:57.460 [2024-11-17 01:37:05.773469] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:57.460 [2024-11-17 01:37:05.773484] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:57.460 [2024-11-17 01:37:05.773491] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:57.460 [2024-11-17 01:37:05.773502] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:57.460 [2024-11-17 01:37:05.773509] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:57.460 [2024-11-17 01:37:05.773519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:57.460 [2024-11-17 01:37:05.773527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:57.460 [2024-11-17 01:37:05.773535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:57.460 [2024-11-17 01:37:05.773542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:57.460 [2024-11-17 01:37:05.773553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
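Each FTL management step above logs the same four-entry pattern from mngt/ftl_mngt.c trace_step: Action, name, duration, status. The "Bands validity" dump is the clean-shutdown report: all 100 bands hold 0 of 261120 valid blocks with wr_cnt 0 and state free, so nothing user-written remains after the two trims. In the statistics block just above, WAF (write amplification factor) prints as "inf" because it is total writes divided by user writes and user writes is 0 here; the 960 total writes are metadata traffic only. A minimal sketch of that quotient as a hypothetical awk helper (waf.awk is not part of the SPDK tree; it just rereads a dump like the one above):

    # waf.awk -- hypothetical: compute WAF from an ftl_dev_dump_stats section.
    # WAF = total writes / user writes; FTL reports "inf" when user writes == 0.
    /total writes:/ { total = $NF }
    /user writes:/  { user  = $NF }
    END { if (user == 0) print "WAF: inf"; else printf "WAF: %.2f\n", total / user }

Run as awk -f waf.awk against this log, it prints "WAF: inf" (960 / 0), matching the line above.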
00:16:57.460 [2024-11-17 01:37:05.773562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:57.460 [2024-11-17 01:37:05.773575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:16:57.460 [2024-11-17 01:37:05.773583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.460 [2024-11-17 01:37:05.788703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.460 [2024-11-17 01:37:05.788927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:57.460 [2024-11-17 01:37:05.788957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.074 ms 00:16:57.460 [2024-11-17 01:37:05.788966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.460 [2024-11-17 01:37:05.789456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.460 [2024-11-17 01:37:05.789480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:57.460 [2024-11-17 01:37:05.789493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:16:57.460 [2024-11-17 01:37:05.789505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.460 [2024-11-17 01:37:05.840614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.460 [2024-11-17 01:37:05.840740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:57.460 [2024-11-17 01:37:05.840760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.460 [2024-11-17 01:37:05.840768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.460 [2024-11-17 01:37:05.840865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.460 [2024-11-17 01:37:05.840875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:57.460 [2024-11-17 01:37:05.840887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.460 [2024-11-17 01:37:05.840897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.460 [2024-11-17 01:37:05.840944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.460 [2024-11-17 01:37:05.840963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:57.460 [2024-11-17 01:37:05.840975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.460 [2024-11-17 01:37:05.840982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.460 [2024-11-17 01:37:05.841002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.460 [2024-11-17 01:37:05.841010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:57.460 [2024-11-17 01:37:05.841020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.460 [2024-11-17 01:37:05.841027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.721 [2024-11-17 01:37:05.921453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.721 [2024-11-17 01:37:05.921497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:57.721 [2024-11-17 01:37:05.921511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.721 [2024-11-17 01:37:05.921521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.721 [2024-11-17 
01:37:05.987677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.721 [2024-11-17 01:37:05.987884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:57.721 [2024-11-17 01:37:05.987905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.721 [2024-11-17 01:37:05.987917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.721 [2024-11-17 01:37:05.987990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.721 [2024-11-17 01:37:05.988000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:57.721 [2024-11-17 01:37:05.988014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.721 [2024-11-17 01:37:05.988022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.721 [2024-11-17 01:37:05.988055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.721 [2024-11-17 01:37:05.988065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:57.721 [2024-11-17 01:37:05.988074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.721 [2024-11-17 01:37:05.988082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.721 [2024-11-17 01:37:05.988193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.721 [2024-11-17 01:37:05.988203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:57.721 [2024-11-17 01:37:05.988214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.721 [2024-11-17 01:37:05.988221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.721 [2024-11-17 01:37:05.988259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.721 [2024-11-17 01:37:05.988268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:57.721 [2024-11-17 01:37:05.988279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.721 [2024-11-17 01:37:05.988287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.721 [2024-11-17 01:37:05.988333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.721 [2024-11-17 01:37:05.988345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:57.721 [2024-11-17 01:37:05.988357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.721 [2024-11-17 01:37:05.988365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.721 [2024-11-17 01:37:05.988418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.721 [2024-11-17 01:37:05.988428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:57.721 [2024-11-17 01:37:05.988439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.721 [2024-11-17 01:37:05.988448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.721 [2024-11-17 01:37:05.988603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 299.252 ms, result 0 00:16:58.665 01:37:06 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:58.665 [2024-11-17 01:37:06.844286] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:58.665 [2024-11-17 01:37:06.844643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74228 ] 00:16:58.665 [2024-11-17 01:37:07.010166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.926 [2024-11-17 01:37:07.149188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.187 [2024-11-17 01:37:07.479885] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:59.187 [2024-11-17 01:37:07.479972] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:59.449 [2024-11-17 01:37:07.646222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.449 [2024-11-17 01:37:07.646505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:59.449 [2024-11-17 01:37:07.646535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:59.449 [2024-11-17 01:37:07.646546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.449 [2024-11-17 01:37:07.649754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.449 [2024-11-17 01:37:07.649965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:59.449 [2024-11-17 01:37:07.649989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:16:59.449 [2024-11-17 01:37:07.649999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.449 [2024-11-17 01:37:07.650709] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:59.449 [2024-11-17 01:37:07.653142] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:59.449 [2024-11-17 01:37:07.653523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.449 [2024-11-17 01:37:07.653566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:59.449 [2024-11-17 01:37:07.653595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:16:59.449 [2024-11-17 01:37:07.653618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.450 [2024-11-17 01:37:07.657088] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:59.450 [2024-11-17 01:37:07.675711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.450 [2024-11-17 01:37:07.675769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:59.450 [2024-11-17 01:37:07.675783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.630 ms 00:16:59.450 [2024-11-17 01:37:07.675810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.450 [2024-11-17 01:37:07.675933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.450 [2024-11-17 01:37:07.675964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:59.450 [2024-11-17 01:37:07.675975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:59.450 [2024-11-17 
01:37:07.675984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.450 [2024-11-17 01:37:07.687583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.450 [2024-11-17 01:37:07.687827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:59.450 [2024-11-17 01:37:07.687858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.550 ms 00:16:59.450 [2024-11-17 01:37:07.687867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.450 [2024-11-17 01:37:07.687997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.450 [2024-11-17 01:37:07.688009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:59.450 [2024-11-17 01:37:07.688018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:59.450 [2024-11-17 01:37:07.688027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.450 [2024-11-17 01:37:07.688059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.450 [2024-11-17 01:37:07.688069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:59.450 [2024-11-17 01:37:07.688077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:59.450 [2024-11-17 01:37:07.688085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.450 [2024-11-17 01:37:07.688108] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:59.450 [2024-11-17 01:37:07.692773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.450 [2024-11-17 01:37:07.692826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:59.450 [2024-11-17 01:37:07.692838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:16:59.450 [2024-11-17 01:37:07.692847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.450 [2024-11-17 01:37:07.692911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.450 [2024-11-17 01:37:07.692922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:59.450 [2024-11-17 01:37:07.692932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:59.450 [2024-11-17 01:37:07.692940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.450 [2024-11-17 01:37:07.692967] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:59.450 [2024-11-17 01:37:07.692993] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:59.450 [2024-11-17 01:37:07.693036] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:59.450 [2024-11-17 01:37:07.693055] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:59.450 [2024-11-17 01:37:07.693167] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:59.450 [2024-11-17 01:37:07.693179] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:59.450 [2024-11-17 01:37:07.693191] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
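The run has now reached trim.sh's read-back phase: the previous SPDK app was killed, FTL shut down cleanly, and spdk_dd starts as its own SPDK application, which is why the log shows a fresh DPDK/EAL initialization (file-prefix spdk_pid74228) and a second complete "FTL startup" sequence before any data moves. Condensed to plain shell (the commands below are copied from this log, with this CI run's paths; trim.sh's surrounding setup and verification logic is omitted):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # trim.sh@99/@100: punch two 1024-block holes, one at LBA 0 and one at the tail
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
    # trim.sh@105: restart via spdk_dd and read 65536 blocks out to a file
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The two "Currently unable to find bdev with name: nvc0n1" notices just above appear to be the open-retry while the cache device is still coming up, not a failure: startup proceeds and nvc0n1p0 is then used as the write buffer cache.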
00:16:59.450 [2024-11-17 01:37:07.693207] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693218] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693227] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:59.450 [2024-11-17 01:37:07.693238] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:59.450 [2024-11-17 01:37:07.693247] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:59.450 [2024-11-17 01:37:07.693256] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:59.450 [2024-11-17 01:37:07.693264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.450 [2024-11-17 01:37:07.693273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:59.450 [2024-11-17 01:37:07.693281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:16:59.450 [2024-11-17 01:37:07.693289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.450 [2024-11-17 01:37:07.693380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.450 [2024-11-17 01:37:07.693393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:59.450 [2024-11-17 01:37:07.693401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:59.450 [2024-11-17 01:37:07.693408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.450 [2024-11-17 01:37:07.693510] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:59.450 [2024-11-17 01:37:07.693531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:59.450 [2024-11-17 01:37:07.693541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:59.450 [2024-11-17 01:37:07.693567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:59.450 [2024-11-17 01:37:07.693590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.450 [2024-11-17 01:37:07.693603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:59.450 [2024-11-17 01:37:07.693610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:59.450 [2024-11-17 01:37:07.693618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.450 [2024-11-17 01:37:07.693634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:59.450 [2024-11-17 01:37:07.693640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:59.450 [2024-11-17 01:37:07.693647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:16:59.450 [2024-11-17 01:37:07.693665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:59.450 [2024-11-17 01:37:07.693687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:59.450 [2024-11-17 01:37:07.693707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:59.450 [2024-11-17 01:37:07.693729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:59.450 [2024-11-17 01:37:07.693750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:59.450 [2024-11-17 01:37:07.693771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.450 [2024-11-17 01:37:07.693784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:59.450 [2024-11-17 01:37:07.693819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:59.450 [2024-11-17 01:37:07.693827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.450 [2024-11-17 01:37:07.693836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:59.450 [2024-11-17 01:37:07.693844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:59.450 [2024-11-17 01:37:07.693851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:59.450 [2024-11-17 01:37:07.693866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:59.450 [2024-11-17 01:37:07.693873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693880] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:59.450 [2024-11-17 01:37:07.693888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:59.450 [2024-11-17 01:37:07.693900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.450 [2024-11-17 01:37:07.693916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:59.450 [2024-11-17 01:37:07.693928] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:59.450 [2024-11-17 01:37:07.693938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:59.450 [2024-11-17 01:37:07.693946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:59.450 [2024-11-17 01:37:07.693953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:59.450 [2024-11-17 01:37:07.693960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:59.450 [2024-11-17 01:37:07.693968] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:59.451 [2024-11-17 01:37:07.693978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.451 [2024-11-17 01:37:07.693988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:59.451 [2024-11-17 01:37:07.693996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:59.451 [2024-11-17 01:37:07.694003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:59.451 [2024-11-17 01:37:07.694010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:59.451 [2024-11-17 01:37:07.694018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:59.451 [2024-11-17 01:37:07.694026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:59.451 [2024-11-17 01:37:07.694034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:59.451 [2024-11-17 01:37:07.694040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:59.451 [2024-11-17 01:37:07.694047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:59.451 [2024-11-17 01:37:07.694054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:59.451 [2024-11-17 01:37:07.694062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:59.451 [2024-11-17 01:37:07.694070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:59.451 [2024-11-17 01:37:07.694078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:59.451 [2024-11-17 01:37:07.694085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:59.451 [2024-11-17 01:37:07.694092] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:59.451 [2024-11-17 01:37:07.694101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.451 [2024-11-17 01:37:07.694110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:59.451 [2024-11-17 01:37:07.694118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:59.451 [2024-11-17 01:37:07.694126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:59.451 [2024-11-17 01:37:07.694134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:59.451 [2024-11-17 01:37:07.694142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.694154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:59.451 [2024-11-17 01:37:07.694173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:16:59.451 [2024-11-17 01:37:07.694182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.732680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.732734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:59.451 [2024-11-17 01:37:07.732748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.437 ms 00:16:59.451 [2024-11-17 01:37:07.732757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.732926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.732940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:59.451 [2024-11-17 01:37:07.732951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:59.451 [2024-11-17 01:37:07.732960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.782353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.782413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:59.451 [2024-11-17 01:37:07.782432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.367 ms 00:16:59.451 [2024-11-17 01:37:07.782442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.782583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.782597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.451 [2024-11-17 01:37:07.782608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:59.451 [2024-11-17 01:37:07.782617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.783347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.783384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.451 [2024-11-17 01:37:07.783396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:16:59.451 [2024-11-17 01:37:07.783412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.783588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.783609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:59.451 [2024-11-17 01:37:07.783619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:16:59.451 [2024-11-17 01:37:07.783628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.802641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.802693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.451 [2024-11-17 01:37:07.802705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.989 ms 00:16:59.451 [2024-11-17 01:37:07.802714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.818160] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:59.451 [2024-11-17 01:37:07.818231] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:59.451 [2024-11-17 01:37:07.818245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.818255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:59.451 [2024-11-17 01:37:07.818266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.374 ms 00:16:59.451 [2024-11-17 01:37:07.818274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.844909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.845130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:59.451 [2024-11-17 01:37:07.845153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.531 ms 00:16:59.451 [2024-11-17 01:37:07.845163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.858320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.858375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:59.451 [2024-11-17 01:37:07.858388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.082 ms 00:16:59.451 [2024-11-17 01:37:07.858397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.871629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.871689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:59.451 [2024-11-17 01:37:07.871702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.139 ms 00:16:59.451 [2024-11-17 01:37:07.871710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.451 [2024-11-17 01:37:07.872427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.451 [2024-11-17 01:37:07.872463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:59.451 [2024-11-17 01:37:07.872475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:16:59.451 [2024-11-17 01:37:07.872484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.713 [2024-11-17 01:37:07.947196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.713 [2024-11-17 
01:37:07.947420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:59.713 [2024-11-17 01:37:07.947448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.683 ms 00:16:59.713 [2024-11-17 01:37:07.947459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.713 [2024-11-17 01:37:07.959715] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:59.713 [2024-11-17 01:37:07.984634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.713 [2024-11-17 01:37:07.984695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:59.713 [2024-11-17 01:37:07.984710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.077 ms 00:16:59.713 [2024-11-17 01:37:07.984727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.713 [2024-11-17 01:37:07.984856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.713 [2024-11-17 01:37:07.984870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:59.713 [2024-11-17 01:37:07.984881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:59.713 [2024-11-17 01:37:07.984893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.713 [2024-11-17 01:37:07.984961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.713 [2024-11-17 01:37:07.984972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:59.713 [2024-11-17 01:37:07.984982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:59.713 [2024-11-17 01:37:07.984997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.713 [2024-11-17 01:37:07.985030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.713 [2024-11-17 01:37:07.985040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:59.713 [2024-11-17 01:37:07.985049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:59.713 [2024-11-17 01:37:07.985058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.713 [2024-11-17 01:37:07.985100] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:59.713 [2024-11-17 01:37:07.985112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.713 [2024-11-17 01:37:07.985121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:59.713 [2024-11-17 01:37:07.985130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:59.713 [2024-11-17 01:37:07.985139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.713 [2024-11-17 01:37:08.012441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.713 [2024-11-17 01:37:08.012496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:59.713 [2024-11-17 01:37:08.012511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.275 ms 00:16:59.713 [2024-11-17 01:37:08.012521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.713 [2024-11-17 01:37:08.012662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.713 [2024-11-17 01:37:08.012677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:59.713 [2024-11-17 
01:37:08.012687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:59.713 [2024-11-17 01:37:08.012696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.713 [2024-11-17 01:37:08.014322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.713 [2024-11-17 01:37:08.017919] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.697 ms, result 0 00:16:59.713 [2024-11-17 01:37:08.019358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:59.713 [2024-11-17 01:37:08.033022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:00.658  [2024-11-17T01:37:10.516Z] Copying: 22/256 [MB] (22 MBps) [2024-11-17T01:37:11.461Z] Copying: 43/256 [MB] (20 MBps) [2024-11-17T01:37:12.403Z] Copying: 58/256 [MB] (14 MBps) [2024-11-17T01:37:13.345Z] Copying: 70/256 [MB] (12 MBps) [2024-11-17T01:37:14.283Z] Copying: 84/256 [MB] (14 MBps) [2024-11-17T01:37:15.227Z] Copying: 96/256 [MB] (11 MBps) [2024-11-17T01:37:16.171Z] Copying: 107/256 [MB] (10 MBps) [2024-11-17T01:37:17.113Z] Copying: 117/256 [MB] (10 MBps) [2024-11-17T01:37:18.501Z] Copying: 129/256 [MB] (12 MBps) [2024-11-17T01:37:19.445Z] Copying: 144/256 [MB] (15 MBps) [2024-11-17T01:37:20.457Z] Copying: 157/256 [MB] (12 MBps) [2024-11-17T01:37:21.441Z] Copying: 168/256 [MB] (10 MBps) [2024-11-17T01:37:22.387Z] Copying: 179/256 [MB] (11 MBps) [2024-11-17T01:37:23.328Z] Copying: 191/256 [MB] (11 MBps) [2024-11-17T01:37:24.272Z] Copying: 202/256 [MB] (11 MBps) [2024-11-17T01:37:25.216Z] Copying: 213/256 [MB] (11 MBps) [2024-11-17T01:37:26.157Z] Copying: 224/256 [MB] (10 MBps) [2024-11-17T01:37:27.542Z] Copying: 236/256 [MB] (11 MBps) [2024-11-17T01:37:28.113Z] Copying: 246/256 [MB] (10 MBps) [2024-11-17T01:37:28.374Z] Copying: 256/256 [MB] (average 12 MBps)[2024-11-17 01:37:28.262769] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.915 [2024-11-17 01:37:28.273105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.915 [2024-11-17 01:37:28.273159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:19.915 [2024-11-17 01:37:28.273176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.915 [2024-11-17 01:37:28.273195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.915 [2024-11-17 01:37:28.273222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:19.915 [2024-11-17 01:37:28.277155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.915 [2024-11-17 01:37:28.277198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:19.915 [2024-11-17 01:37:28.277212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.916 ms 00:17:19.915 [2024-11-17 01:37:28.277221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.915 [2024-11-17 01:37:28.277613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.915 [2024-11-17 01:37:28.277638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:19.915 [2024-11-17 01:37:28.277649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 
00:17:19.915 [2024-11-17 01:37:28.277658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.915 [2024-11-17 01:37:28.281523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.915 [2024-11-17 01:37:28.281558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:19.915 [2024-11-17 01:37:28.281568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.848 ms 00:17:19.915 [2024-11-17 01:37:28.281577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.915 [2024-11-17 01:37:28.288676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.915 [2024-11-17 01:37:28.290122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:19.915 [2024-11-17 01:37:28.290148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.079 ms 00:17:19.915 [2024-11-17 01:37:28.290158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.915 [2024-11-17 01:37:28.316779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.915 [2024-11-17 01:37:28.316980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:19.915 [2024-11-17 01:37:28.317002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.536 ms 00:17:19.915 [2024-11-17 01:37:28.317011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.915 [2024-11-17 01:37:28.332900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.915 [2024-11-17 01:37:28.332950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:19.915 [2024-11-17 01:37:28.332971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.727 ms 00:17:19.915 [2024-11-17 01:37:28.332980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.915 [2024-11-17 01:37:28.333121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.915 [2024-11-17 01:37:28.333134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:19.915 [2024-11-17 01:37:28.333144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:19.915 [2024-11-17 01:37:28.333153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.915 [2024-11-17 01:37:28.358700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.915 [2024-11-17 01:37:28.358747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:19.915 [2024-11-17 01:37:28.358759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.518 ms 00:17:19.915 [2024-11-17 01:37:28.358767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.175 [2024-11-17 01:37:28.384155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.175 [2024-11-17 01:37:28.384341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.175 [2024-11-17 01:37:28.384364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.304 ms 00:17:20.175 [2024-11-17 01:37:28.384372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.175 [2024-11-17 01:37:28.407640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.176 [2024-11-17 01:37:28.407759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.176 [2024-11-17 01:37:28.407774] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.151 ms 00:17:20.176 [2024-11-17 01:37:28.407781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.176 [2024-11-17 01:37:28.430083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.176 [2024-11-17 01:37:28.430205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.176 [2024-11-17 01:37:28.430220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.232 ms 00:17:20.176 [2024-11-17 01:37:28.430228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.176 [2024-11-17 01:37:28.430258] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.176 [2024-11-17 01:37:28.430272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430623] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 
01:37:28.430834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.176 [2024-11-17 01:37:28.430909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.430999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.431006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.431013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:17:20.177 [2024-11-17 01:37:28.431020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.431034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.431041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.431055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.431064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.431072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.177 [2024-11-17 01:37:28.431087] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.177 [2024-11-17 01:37:28.431095] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd4ec39-87e8-43bf-a69d-a90d3ef6bdcb 00:17:20.177 [2024-11-17 01:37:28.431103] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.177 [2024-11-17 01:37:28.431110] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.177 [2024-11-17 01:37:28.431118] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.177 [2024-11-17 01:37:28.431126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.177 [2024-11-17 01:37:28.431133] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.177 [2024-11-17 01:37:28.431140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.177 [2024-11-17 01:37:28.431150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.177 [2024-11-17 01:37:28.431156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.177 [2024-11-17 01:37:28.431162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.177 [2024-11-17 01:37:28.431169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.177 [2024-11-17 01:37:28.431176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.177 [2024-11-17 01:37:28.431184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:17:20.177 [2024-11-17 01:37:28.431192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.443508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.177 [2024-11-17 01:37:28.443535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.177 [2024-11-17 01:37:28.443545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.299 ms 00:17:20.177 [2024-11-17 01:37:28.443553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.443942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.177 [2024-11-17 01:37:28.443954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.177 [2024-11-17 01:37:28.443962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:17:20.177 [2024-11-17 01:37:28.443970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.479204] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.479234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.177 [2024-11-17 01:37:28.479244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.479251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.479329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.479338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.177 [2024-11-17 01:37:28.479346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.479354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.479396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.479405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.177 [2024-11-17 01:37:28.479413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.479421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.479441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.479449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.177 [2024-11-17 01:37:28.479457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.479465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.556366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.556409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.177 [2024-11-17 01:37:28.556421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.556429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.620420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.620582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.177 [2024-11-17 01:37:28.620599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.620607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.620664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.620673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.177 [2024-11-17 01:37:28.620681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.620689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.620718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.620730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.177 [2024-11-17 01:37:28.620738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.620745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
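Each management step in this trace is emitted by mngt/ftl_mngt.c:trace_step as a four-line group: Action (or Rollback), name, duration, status. A rough post-processing sketch for a saved copy of this log follows — build.log is a hypothetical capture path, and the patterns are inferred from the trace format above rather than taken from the test suite:

    # list the slowest FTL management steps, assuming one trace entry per line
    awk '/trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); print $1 "\t" name }' build.log |
      sort -rn | head

After the substitution, $1 is the millisecond count, so the numeric sort surfaces steps such as 'Restore P2L checkpoints' (74.683 ms above) first.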
00:17:20.177 [2024-11-17 01:37:28.620856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.620868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.177 [2024-11-17 01:37:28.620876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.620883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.620916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.620925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.177 [2024-11-17 01:37:28.620935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.620942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.620979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.620987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.177 [2024-11-17 01:37:28.620994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.621001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.621042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.177 [2024-11-17 01:37:28.621055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.177 [2024-11-17 01:37:28.621063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.177 [2024-11-17 01:37:28.621070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.177 [2024-11-17 01:37:28.621203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.108 ms, result 0 00:17:21.118 00:17:21.118 00:17:21.118 01:37:29 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:21.686 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:21.686 01:37:29 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:21.686 01:37:29 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:21.686 01:37:29 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:21.686 01:37:29 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:21.686 01:37:29 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:21.686 01:37:29 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:21.686 01:37:29 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74170 00:17:21.686 01:37:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74170 ']' 00:17:21.686 01:37:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74170 00:17:21.686 Process with pid 74170 is not found 00:17:21.686 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74170) - No such process 00:17:21.686 01:37:29 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 74170 is not found' 00:17:21.686 00:17:21.686 real 1m24.700s 00:17:21.686 user 1m47.255s 00:17:21.686 sys 0m6.029s 00:17:21.686 01:37:29 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.686 
************************************ 00:17:21.686 END TEST ftl_trim 00:17:21.686 ************************************ 00:17:21.686 01:37:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:21.686 01:37:30 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:21.686 01:37:30 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:21.686 01:37:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.686 01:37:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:21.686 ************************************ 00:17:21.686 START TEST ftl_restore 00:17:21.686 ************************************ 00:17:21.686 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:21.686 * Looking for test storage... 00:17:21.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.686 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.947 01:37:30 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:21.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.947 --rc genhtml_branch_coverage=1 00:17:21.947 --rc genhtml_function_coverage=1 00:17:21.947 --rc genhtml_legend=1 00:17:21.947 --rc geninfo_all_blocks=1 00:17:21.947 --rc geninfo_unexecuted_blocks=1 00:17:21.947 00:17:21.947 ' 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:21.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.947 --rc genhtml_branch_coverage=1 00:17:21.947 --rc genhtml_function_coverage=1 00:17:21.947 --rc genhtml_legend=1 00:17:21.947 --rc geninfo_all_blocks=1 00:17:21.947 --rc geninfo_unexecuted_blocks=1 00:17:21.947 00:17:21.947 ' 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:21.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.947 --rc genhtml_branch_coverage=1 00:17:21.947 --rc genhtml_function_coverage=1 00:17:21.947 --rc genhtml_legend=1 00:17:21.947 --rc geninfo_all_blocks=1 00:17:21.947 --rc geninfo_unexecuted_blocks=1 00:17:21.947 00:17:21.947 ' 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:21.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.947 --rc genhtml_branch_coverage=1 00:17:21.947 --rc genhtml_function_coverage=1 00:17:21.947 --rc genhtml_legend=1 00:17:21.947 --rc geninfo_all_blocks=1 00:17:21.947 --rc geninfo_unexecuted_blocks=1 00:17:21.947 00:17:21.947 ' 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
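The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x: both version strings are split on IFS=.-: and compared field by field. A condensed sketch of that pattern — version_lt is a hypothetical name, simplified from the trace rather than copied from common.sh:

    # return 0 when $1 is lower than $2, comparing dot-separated numeric fields
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo "old lcov"    # 1 < 2 on the first field, so this prints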
00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.NRk3O2JJ1x 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:21.947 
01:37:30 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74536 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74536 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 74536 ']' 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.947 01:37:30 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.947 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.948 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.948 01:37:30 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:21.948 [2024-11-17 01:37:30.324437] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:21.948 [2024-11-17 01:37:30.324841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74536 ] 00:17:22.208 [2024-11-17 01:37:30.487625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.208 [2024-11-17 01:37:30.584087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.779 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.779 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:17:22.779 01:37:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:22.779 01:37:31 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:22.779 01:37:31 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:22.779 01:37:31 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:22.779 01:37:31 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:22.779 01:37:31 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:23.351 01:37:31 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:23.351 01:37:31 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:23.351 01:37:31 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:23.351 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:23.351 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:23.351 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:23.351 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:23.351 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:23.351 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:23.351 { 00:17:23.351 "name": "nvme0n1", 00:17:23.351 "aliases": [ 00:17:23.351 "4d2c0821-039b-41fb-a0ea-0850906296b8" 00:17:23.351 ], 00:17:23.351 "product_name": "NVMe disk", 00:17:23.351 "block_size": 4096, 00:17:23.351 "num_blocks": 1310720, 00:17:23.351 "uuid": 
"4d2c0821-039b-41fb-a0ea-0850906296b8", 00:17:23.351 "numa_id": -1, 00:17:23.351 "assigned_rate_limits": { 00:17:23.351 "rw_ios_per_sec": 0, 00:17:23.351 "rw_mbytes_per_sec": 0, 00:17:23.351 "r_mbytes_per_sec": 0, 00:17:23.351 "w_mbytes_per_sec": 0 00:17:23.351 }, 00:17:23.351 "claimed": true, 00:17:23.351 "claim_type": "read_many_write_one", 00:17:23.351 "zoned": false, 00:17:23.351 "supported_io_types": { 00:17:23.351 "read": true, 00:17:23.351 "write": true, 00:17:23.351 "unmap": true, 00:17:23.351 "flush": true, 00:17:23.352 "reset": true, 00:17:23.352 "nvme_admin": true, 00:17:23.352 "nvme_io": true, 00:17:23.352 "nvme_io_md": false, 00:17:23.352 "write_zeroes": true, 00:17:23.352 "zcopy": false, 00:17:23.352 "get_zone_info": false, 00:17:23.352 "zone_management": false, 00:17:23.352 "zone_append": false, 00:17:23.352 "compare": true, 00:17:23.352 "compare_and_write": false, 00:17:23.352 "abort": true, 00:17:23.352 "seek_hole": false, 00:17:23.352 "seek_data": false, 00:17:23.352 "copy": true, 00:17:23.352 "nvme_iov_md": false 00:17:23.352 }, 00:17:23.352 "driver_specific": { 00:17:23.352 "nvme": [ 00:17:23.352 { 00:17:23.352 "pci_address": "0000:00:11.0", 00:17:23.352 "trid": { 00:17:23.352 "trtype": "PCIe", 00:17:23.352 "traddr": "0000:00:11.0" 00:17:23.352 }, 00:17:23.352 "ctrlr_data": { 00:17:23.352 "cntlid": 0, 00:17:23.352 "vendor_id": "0x1b36", 00:17:23.352 "model_number": "QEMU NVMe Ctrl", 00:17:23.352 "serial_number": "12341", 00:17:23.352 "firmware_revision": "8.0.0", 00:17:23.352 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:23.352 "oacs": { 00:17:23.352 "security": 0, 00:17:23.352 "format": 1, 00:17:23.352 "firmware": 0, 00:17:23.352 "ns_manage": 1 00:17:23.352 }, 00:17:23.352 "multi_ctrlr": false, 00:17:23.352 "ana_reporting": false 00:17:23.352 }, 00:17:23.352 "vs": { 00:17:23.352 "nvme_version": "1.4" 00:17:23.352 }, 00:17:23.352 "ns_data": { 00:17:23.352 "id": 1, 00:17:23.352 "can_share": false 00:17:23.352 } 00:17:23.352 } 00:17:23.352 ], 00:17:23.352 "mp_policy": "active_passive" 00:17:23.352 } 00:17:23.352 } 00:17:23.352 ]' 00:17:23.352 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:23.352 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:23.352 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:23.352 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:23.352 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:23.352 01:37:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:17:23.352 01:37:31 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:23.352 01:37:31 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:23.352 01:37:31 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:23.352 01:37:31 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:23.352 01:37:31 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:23.613 01:37:31 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=c144c11f-ddad-4dce-9c9c-2085828e92bf 00:17:23.613 01:37:31 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:23.613 01:37:31 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c144c11f-ddad-4dce-9c9c-2085828e92bf 00:17:23.873 01:37:32 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:24.134 01:37:32 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b5e8cb36-5f76-40fb-895c-ae2223e5d461 00:17:24.134 01:37:32 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b5e8cb36-5f76-40fb-895c-ae2223e5d461 00:17:24.395 01:37:32 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:24.395 01:37:32 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:24.395 01:37:32 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:24.395 01:37:32 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:24.395 01:37:32 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:24.395 01:37:32 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:24.395 01:37:32 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:24.395 01:37:32 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:24.395 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:24.395 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:24.395 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:24.395 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:24.395 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:24.657 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:24.657 { 00:17:24.657 "name": "e4502aaf-c034-4ee6-ad0e-f19a39d02415", 00:17:24.657 "aliases": [ 00:17:24.657 "lvs/nvme0n1p0" 00:17:24.657 ], 00:17:24.657 "product_name": "Logical Volume", 00:17:24.657 "block_size": 4096, 00:17:24.657 "num_blocks": 26476544, 00:17:24.657 "uuid": "e4502aaf-c034-4ee6-ad0e-f19a39d02415", 00:17:24.657 "assigned_rate_limits": { 00:17:24.657 "rw_ios_per_sec": 0, 00:17:24.657 "rw_mbytes_per_sec": 0, 00:17:24.657 "r_mbytes_per_sec": 0, 00:17:24.657 "w_mbytes_per_sec": 0 00:17:24.657 }, 00:17:24.657 "claimed": false, 00:17:24.657 "zoned": false, 00:17:24.657 "supported_io_types": { 00:17:24.657 "read": true, 00:17:24.657 "write": true, 00:17:24.657 "unmap": true, 00:17:24.657 "flush": false, 00:17:24.657 "reset": true, 00:17:24.657 "nvme_admin": false, 00:17:24.657 "nvme_io": false, 00:17:24.657 "nvme_io_md": false, 00:17:24.657 "write_zeroes": true, 00:17:24.657 "zcopy": false, 00:17:24.657 "get_zone_info": false, 00:17:24.657 "zone_management": false, 00:17:24.657 "zone_append": false, 00:17:24.657 "compare": false, 00:17:24.657 "compare_and_write": false, 00:17:24.657 "abort": false, 00:17:24.657 "seek_hole": true, 00:17:24.657 "seek_data": true, 00:17:24.657 "copy": false, 00:17:24.657 "nvme_iov_md": false 00:17:24.657 }, 00:17:24.657 "driver_specific": { 00:17:24.657 "lvol": { 00:17:24.657 "lvol_store_uuid": "b5e8cb36-5f76-40fb-895c-ae2223e5d461", 00:17:24.657 "base_bdev": "nvme0n1", 00:17:24.657 "thin_provision": true, 00:17:24.657 "num_allocated_clusters": 0, 00:17:24.657 "snapshot": false, 00:17:24.657 "clone": false, 00:17:24.657 "esnap_clone": false 00:17:24.657 } 00:17:24.657 } 00:17:24.657 } 00:17:24.657 ]' 00:17:24.657 01:37:32 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:24.657 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:24.657 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:24.657 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:24.657 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:24.657 01:37:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:24.657 01:37:32 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:24.657 01:37:32 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:24.657 01:37:32 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:24.919 01:37:33 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:24.919 01:37:33 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:24.919 01:37:33 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:24.919 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:24.919 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:24.919 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:24.919 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:24.919 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:25.180 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.180 { 00:17:25.180 "name": "e4502aaf-c034-4ee6-ad0e-f19a39d02415", 00:17:25.180 "aliases": [ 00:17:25.180 "lvs/nvme0n1p0" 00:17:25.180 ], 00:17:25.180 "product_name": "Logical Volume", 00:17:25.180 "block_size": 4096, 00:17:25.180 "num_blocks": 26476544, 00:17:25.180 "uuid": "e4502aaf-c034-4ee6-ad0e-f19a39d02415", 00:17:25.180 "assigned_rate_limits": { 00:17:25.180 "rw_ios_per_sec": 0, 00:17:25.180 "rw_mbytes_per_sec": 0, 00:17:25.180 "r_mbytes_per_sec": 0, 00:17:25.180 "w_mbytes_per_sec": 0 00:17:25.180 }, 00:17:25.180 "claimed": false, 00:17:25.180 "zoned": false, 00:17:25.180 "supported_io_types": { 00:17:25.180 "read": true, 00:17:25.180 "write": true, 00:17:25.180 "unmap": true, 00:17:25.180 "flush": false, 00:17:25.180 "reset": true, 00:17:25.180 "nvme_admin": false, 00:17:25.180 "nvme_io": false, 00:17:25.180 "nvme_io_md": false, 00:17:25.180 "write_zeroes": true, 00:17:25.180 "zcopy": false, 00:17:25.180 "get_zone_info": false, 00:17:25.180 "zone_management": false, 00:17:25.180 "zone_append": false, 00:17:25.180 "compare": false, 00:17:25.180 "compare_and_write": false, 00:17:25.180 "abort": false, 00:17:25.180 "seek_hole": true, 00:17:25.180 "seek_data": true, 00:17:25.180 "copy": false, 00:17:25.180 "nvme_iov_md": false 00:17:25.180 }, 00:17:25.180 "driver_specific": { 00:17:25.180 "lvol": { 00:17:25.180 "lvol_store_uuid": "b5e8cb36-5f76-40fb-895c-ae2223e5d461", 00:17:25.180 "base_bdev": "nvme0n1", 00:17:25.180 "thin_provision": true, 00:17:25.180 "num_allocated_clusters": 0, 00:17:25.180 "snapshot": false, 00:17:25.180 "clone": false, 00:17:25.180 "esnap_clone": false 00:17:25.180 } 00:17:25.180 } 00:17:25.180 } 00:17:25.180 ]' 00:17:25.180 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
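The jq calls here are driven by get_bdev_size from autotest_common.sh: it pulls block_size and num_blocks out of the bdev_get_bdevs JSON and reports their product in MiB. The essence, assuming a running SPDK target and the repo's scripts/rpc.py:

    info=$(scripts/rpc.py bdev_get_bdevs -b e4502aaf-c034-4ee6-ad0e-f19a39d02415)
    bs=$(jq '.[] .block_size' <<< "$info")    # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$info")    # 26476544 for the lvol
    echo $(( bs * nb / 1024 / 1024 ))         # 4096 * 26476544 B = 103424 MiB

which matches the bdev_size=103424 assignments in the surrounding trace.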
00:17:25.180 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:25.180 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:25.180 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:25.180 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:25.180 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:25.180 01:37:33 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:25.180 01:37:33 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:25.441 01:37:33 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:25.441 01:37:33 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:25.441 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:25.441 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.441 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:25.441 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:25.441 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4502aaf-c034-4ee6-ad0e-f19a39d02415 00:17:25.441 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.441 { 00:17:25.441 "name": "e4502aaf-c034-4ee6-ad0e-f19a39d02415", 00:17:25.441 "aliases": [ 00:17:25.441 "lvs/nvme0n1p0" 00:17:25.441 ], 00:17:25.441 "product_name": "Logical Volume", 00:17:25.441 "block_size": 4096, 00:17:25.441 "num_blocks": 26476544, 00:17:25.441 "uuid": "e4502aaf-c034-4ee6-ad0e-f19a39d02415", 00:17:25.441 "assigned_rate_limits": { 00:17:25.441 "rw_ios_per_sec": 0, 00:17:25.441 "rw_mbytes_per_sec": 0, 00:17:25.441 "r_mbytes_per_sec": 0, 00:17:25.441 "w_mbytes_per_sec": 0 00:17:25.441 }, 00:17:25.441 "claimed": false, 00:17:25.441 "zoned": false, 00:17:25.441 "supported_io_types": { 00:17:25.441 "read": true, 00:17:25.441 "write": true, 00:17:25.441 "unmap": true, 00:17:25.441 "flush": false, 00:17:25.441 "reset": true, 00:17:25.441 "nvme_admin": false, 00:17:25.441 "nvme_io": false, 00:17:25.441 "nvme_io_md": false, 00:17:25.441 "write_zeroes": true, 00:17:25.441 "zcopy": false, 00:17:25.441 "get_zone_info": false, 00:17:25.441 "zone_management": false, 00:17:25.441 "zone_append": false, 00:17:25.441 "compare": false, 00:17:25.441 "compare_and_write": false, 00:17:25.441 "abort": false, 00:17:25.441 "seek_hole": true, 00:17:25.441 "seek_data": true, 00:17:25.441 "copy": false, 00:17:25.441 "nvme_iov_md": false 00:17:25.441 }, 00:17:25.441 "driver_specific": { 00:17:25.441 "lvol": { 00:17:25.441 "lvol_store_uuid": "b5e8cb36-5f76-40fb-895c-ae2223e5d461", 00:17:25.441 "base_bdev": "nvme0n1", 00:17:25.441 "thin_provision": true, 00:17:25.441 "num_allocated_clusters": 0, 00:17:25.441 "snapshot": false, 00:17:25.441 "clone": false, 00:17:25.441 "esnap_clone": false 00:17:25.441 } 00:17:25.441 } 00:17:25.441 } 00:17:25.441 ]' 00:17:25.441 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:25.705 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:25.705 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:25.705 01:37:33 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:17:25.705 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:25.705 01:37:33 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:25.705 01:37:33 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:25.705 01:37:33 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e4502aaf-c034-4ee6-ad0e-f19a39d02415 --l2p_dram_limit 10' 00:17:25.705 01:37:33 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:25.705 01:37:33 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:25.705 01:37:33 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:25.705 01:37:33 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:25.705 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:25.705 01:37:33 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e4502aaf-c034-4ee6-ad0e-f19a39d02415 --l2p_dram_limit 10 -c nvc0n1p0 00:17:25.705 [2024-11-17 01:37:34.121433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.705 [2024-11-17 01:37:34.121472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:25.705 [2024-11-17 01:37:34.121485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:25.705 [2024-11-17 01:37:34.121492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.705 [2024-11-17 01:37:34.121536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.705 [2024-11-17 01:37:34.121544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.705 [2024-11-17 01:37:34.121552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:25.705 [2024-11-17 01:37:34.121558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.705 [2024-11-17 01:37:34.121577] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:25.705 [2024-11-17 01:37:34.122165] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:25.705 [2024-11-17 01:37:34.122186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.705 [2024-11-17 01:37:34.122192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.705 [2024-11-17 01:37:34.122200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:17:25.705 [2024-11-17 01:37:34.122206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.705 [2024-11-17 01:37:34.122285] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aec450e1-c0d4-4563-8d98-0437163278cc 00:17:25.705 [2024-11-17 01:37:34.123230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.705 [2024-11-17 01:37:34.123252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:25.705 [2024-11-17 01:37:34.123261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:25.705 [2024-11-17 01:37:34.123268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.705 [2024-11-17 01:37:34.127963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.705 [2024-11-17 
01:37:34.128078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.705 [2024-11-17 01:37:34.128092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.662 ms 00:17:25.705 [2024-11-17 01:37:34.128100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.705 [2024-11-17 01:37:34.128170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.705 [2024-11-17 01:37:34.128178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.705 [2024-11-17 01:37:34.128184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:25.705 [2024-11-17 01:37:34.128194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.705 [2024-11-17 01:37:34.128232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.705 [2024-11-17 01:37:34.128242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:25.705 [2024-11-17 01:37:34.128248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:25.705 [2024-11-17 01:37:34.128257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.705 [2024-11-17 01:37:34.128274] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.705 [2024-11-17 01:37:34.131137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.705 [2024-11-17 01:37:34.131230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.705 [2024-11-17 01:37:34.131246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.866 ms 00:17:25.705 [2024-11-17 01:37:34.131252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.705 [2024-11-17 01:37:34.131280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.705 [2024-11-17 01:37:34.131287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:25.705 [2024-11-17 01:37:34.131294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:25.705 [2024-11-17 01:37:34.131300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.705 [2024-11-17 01:37:34.131314] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:25.705 [2024-11-17 01:37:34.131419] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:25.705 [2024-11-17 01:37:34.131431] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:25.705 [2024-11-17 01:37:34.131440] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:25.706 [2024-11-17 01:37:34.131449] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:25.706 [2024-11-17 01:37:34.131457] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:25.706 [2024-11-17 01:37:34.131464] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:25.706 [2024-11-17 01:37:34.131470] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:25.706 [2024-11-17 01:37:34.131478] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:25.706 [2024-11-17 01:37:34.131484] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:25.706 [2024-11-17 01:37:34.131492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.706 [2024-11-17 01:37:34.131497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:25.706 [2024-11-17 01:37:34.131504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:17:25.706 [2024-11-17 01:37:34.131515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.706 [2024-11-17 01:37:34.131581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.706 [2024-11-17 01:37:34.131588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:25.706 [2024-11-17 01:37:34.131595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:25.706 [2024-11-17 01:37:34.131600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.706 [2024-11-17 01:37:34.131685] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:25.706 [2024-11-17 01:37:34.131693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:25.706 [2024-11-17 01:37:34.131701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.706 [2024-11-17 01:37:34.131707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:25.706 [2024-11-17 01:37:34.131720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:25.706 [2024-11-17 01:37:34.131731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:25.706 [2024-11-17 01:37:34.131738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.706 [2024-11-17 01:37:34.131751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:25.706 [2024-11-17 01:37:34.131756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:25.706 [2024-11-17 01:37:34.131763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.706 [2024-11-17 01:37:34.131768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:25.706 [2024-11-17 01:37:34.131774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:25.706 [2024-11-17 01:37:34.131779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:25.706 [2024-11-17 01:37:34.131807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:25.706 [2024-11-17 01:37:34.131815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:25.706 [2024-11-17 01:37:34.131827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.706 [2024-11-17 01:37:34.131839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:25.706 
[2024-11-17 01:37:34.131844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.706 [2024-11-17 01:37:34.131855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:25.706 [2024-11-17 01:37:34.131861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.706 [2024-11-17 01:37:34.131873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:25.706 [2024-11-17 01:37:34.131878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.706 [2024-11-17 01:37:34.131889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:25.706 [2024-11-17 01:37:34.131898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.706 [2024-11-17 01:37:34.131909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:25.706 [2024-11-17 01:37:34.131915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:25.706 [2024-11-17 01:37:34.131922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.706 [2024-11-17 01:37:34.131926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:25.706 [2024-11-17 01:37:34.131933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:25.706 [2024-11-17 01:37:34.131938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:25.706 [2024-11-17 01:37:34.131950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:25.706 [2024-11-17 01:37:34.131957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131962] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:25.706 [2024-11-17 01:37:34.131969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:25.706 [2024-11-17 01:37:34.131974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.706 [2024-11-17 01:37:34.131983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.706 [2024-11-17 01:37:34.131989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:25.706 [2024-11-17 01:37:34.131997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:25.706 [2024-11-17 01:37:34.132002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:25.706 [2024-11-17 01:37:34.132009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:25.706 [2024-11-17 01:37:34.132014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:25.706 [2024-11-17 01:37:34.132020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:25.706 [2024-11-17 01:37:34.132027] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:25.706 [2024-11-17 
01:37:34.132036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.706 [2024-11-17 01:37:34.132043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:25.706 [2024-11-17 01:37:34.132050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:25.706 [2024-11-17 01:37:34.132056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:25.706 [2024-11-17 01:37:34.132063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:25.706 [2024-11-17 01:37:34.132068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:25.706 [2024-11-17 01:37:34.132074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:25.706 [2024-11-17 01:37:34.132079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:25.706 [2024-11-17 01:37:34.132086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:25.706 [2024-11-17 01:37:34.132091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:25.706 [2024-11-17 01:37:34.132099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:25.706 [2024-11-17 01:37:34.132104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:25.706 [2024-11-17 01:37:34.132110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:25.706 [2024-11-17 01:37:34.132117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:25.706 [2024-11-17 01:37:34.132125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:25.706 [2024-11-17 01:37:34.132130] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:25.706 [2024-11-17 01:37:34.132137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.706 [2024-11-17 01:37:34.132143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:25.706 [2024-11-17 01:37:34.132150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:25.706 [2024-11-17 01:37:34.132156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:25.706 [2024-11-17 01:37:34.132163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:25.706 [2024-11-17 01:37:34.132169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.706 [2024-11-17 01:37:34.132176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:25.706 [2024-11-17 01:37:34.132182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:17:25.706 [2024-11-17 01:37:34.132189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.706 [2024-11-17 01:37:34.132229] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:25.706 [2024-11-17 01:37:34.132240] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:29.010 [2024-11-17 01:37:37.296667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.011 [2024-11-17 01:37:37.296756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:29.011 [2024-11-17 01:37:37.296775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3164.424 ms 00:17:29.011 [2024-11-17 01:37:37.296787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.011 [2024-11-17 01:37:37.328514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.011 [2024-11-17 01:37:37.328579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.011 [2024-11-17 01:37:37.328593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.463 ms 00:17:29.011 [2024-11-17 01:37:37.328606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.011 [2024-11-17 01:37:37.328744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.011 [2024-11-17 01:37:37.328759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:29.011 [2024-11-17 01:37:37.328769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:29.011 [2024-11-17 01:37:37.328783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.011 [2024-11-17 01:37:37.363885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.011 [2024-11-17 01:37:37.363935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.011 [2024-11-17 01:37:37.363948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.031 ms 00:17:29.011 [2024-11-17 01:37:37.363958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.011 [2024-11-17 01:37:37.363991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.011 [2024-11-17 01:37:37.364007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.011 [2024-11-17 01:37:37.364016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.011 [2024-11-17 01:37:37.364027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.011 [2024-11-17 01:37:37.364560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.011 [2024-11-17 01:37:37.364587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.011 [2024-11-17 01:37:37.364600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:17:29.011 [2024-11-17 01:37:37.364611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.011 
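The scrub step above wiped the whole 5171 MiB write-buffer cache in 5 chunks in 3164.424 ms. A rough rate check, assuming the chunks are equal-sized (the wall time includes management overhead, so treat the rate as a lower bound):

  echo 'scale=1; 5171 / 5' | bc                  # ~1034.2 MiB per chunk
  echo 'scale=1; 5171 * 1000 / 3164.424' | bc    # ~1634.1 MiB/s effective scrub rate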
[2024-11-17 01:37:37.364727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.011 [2024-11-17 01:37:37.364740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.011 [2024-11-17 01:37:37.364755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:29.011 [2024-11-17 01:37:37.364767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.011 [2024-11-17 01:37:37.381981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.011 [2024-11-17 01:37:37.382204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.011 [2024-11-17 01:37:37.382225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.193 ms 00:17:29.011 [2024-11-17 01:37:37.382236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.011 [2024-11-17 01:37:37.395348] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:29.011 [2024-11-17 01:37:37.399125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.011 [2024-11-17 01:37:37.399169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:29.011 [2024-11-17 01:37:37.399183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.792 ms 00:17:29.011 [2024-11-17 01:37:37.399192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.272 [2024-11-17 01:37:37.508862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.272 [2024-11-17 01:37:37.508926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:29.272 [2024-11-17 01:37:37.508946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.631 ms 00:17:29.272 [2024-11-17 01:37:37.508955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.272 [2024-11-17 01:37:37.509165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.272 [2024-11-17 01:37:37.509183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:29.272 [2024-11-17 01:37:37.509199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:17:29.272 [2024-11-17 01:37:37.509208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.272 [2024-11-17 01:37:37.535695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.272 [2024-11-17 01:37:37.535747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:29.272 [2024-11-17 01:37:37.535765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.409 ms 00:17:29.272 [2024-11-17 01:37:37.535775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.272 [2024-11-17 01:37:37.561134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.272 [2024-11-17 01:37:37.561189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:29.272 [2024-11-17 01:37:37.561204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.284 ms 00:17:29.272 [2024-11-17 01:37:37.561212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.272 [2024-11-17 01:37:37.561873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.272 [2024-11-17 01:37:37.561897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:29.272 
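The "l2p maximum resident size is: 9 (of 10) MiB" notice above is the --l2p_dram_limit 10 from the bdev_ftl_create call taking effect: the full logical-to-physical mapping table is much larger than the DRAM budget, so only a slice stays resident and the rest is paged. From the layout dump earlier (20971520 L2P entries at 4 bytes each):

  echo $(( 20971520 * 4 / 1024 / 1024 ))    # 80 MiB full L2P table, matching the 80.00 MiB l2p region

so the 10 MiB limit keeps roughly an eighth of the table cached at any one time.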
[2024-11-17 01:37:37.561910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:17:29.273 [2024-11-17 01:37:37.561919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.273 [2024-11-17 01:37:37.644523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.273 [2024-11-17 01:37:37.644573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:29.273 [2024-11-17 01:37:37.644594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.552 ms 00:17:29.273 [2024-11-17 01:37:37.644603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.273 [2024-11-17 01:37:37.672271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.273 [2024-11-17 01:37:37.672322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:29.273 [2024-11-17 01:37:37.672337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.567 ms 00:17:29.273 [2024-11-17 01:37:37.672346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.273 [2024-11-17 01:37:37.698480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.273 [2024-11-17 01:37:37.698526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:29.273 [2024-11-17 01:37:37.698540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.079 ms 00:17:29.273 [2024-11-17 01:37:37.698548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.273 [2024-11-17 01:37:37.725160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.273 [2024-11-17 01:37:37.725206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.273 [2024-11-17 01:37:37.725221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.558 ms 00:17:29.273 [2024-11-17 01:37:37.725230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.273 [2024-11-17 01:37:37.725284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.273 [2024-11-17 01:37:37.725293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.273 [2024-11-17 01:37:37.725309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:29.273 [2024-11-17 01:37:37.725317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.273 [2024-11-17 01:37:37.725412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.273 [2024-11-17 01:37:37.725422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:29.273 [2024-11-17 01:37:37.725436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:29.273 [2024-11-17 01:37:37.725446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.273 [2024-11-17 01:37:37.726633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3604.699 ms, result 0 00:17:29.534 { 00:17:29.534 "name": "ftl0", 00:17:29.534 "uuid": "aec450e1-c0d4-4563-8d98-0437163278cc" 00:17:29.534 } 00:17:29.534 01:37:37 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:29.534 01:37:37 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:29.534 01:37:37 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:29.534 01:37:37 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:29.796 [2024-11-17 01:37:38.153981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.796 [2024-11-17 01:37:38.154044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:29.796 [2024-11-17 01:37:38.154059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.796 [2024-11-17 01:37:38.154077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.796 [2024-11-17 01:37:38.154103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.796 [2024-11-17 01:37:38.157117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.796 [2024-11-17 01:37:38.157342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:29.796 [2024-11-17 01:37:38.157369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.990 ms 00:17:29.796 [2024-11-17 01:37:38.157379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.796 [2024-11-17 01:37:38.157680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.796 [2024-11-17 01:37:38.157693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:29.796 [2024-11-17 01:37:38.157709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:17:29.796 [2024-11-17 01:37:38.157718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.796 [2024-11-17 01:37:38.160989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.796 [2024-11-17 01:37:38.161016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:29.796 [2024-11-17 01:37:38.161029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:17:29.796 [2024-11-17 01:37:38.161037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.796 [2024-11-17 01:37:38.167336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.796 [2024-11-17 01:37:38.167379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:29.796 [2024-11-17 01:37:38.167397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.256 ms 00:17:29.796 [2024-11-17 01:37:38.167405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.796 [2024-11-17 01:37:38.194027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.796 [2024-11-17 01:37:38.194076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:29.796 [2024-11-17 01:37:38.194091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.546 ms 00:17:29.796 [2024-11-17 01:37:38.194099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.796 [2024-11-17 01:37:38.212254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.796 [2024-11-17 01:37:38.212303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:29.796 [2024-11-17 01:37:38.212318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.100 ms 00:17:29.796 [2024-11-17 01:37:38.212327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.796 [2024-11-17 01:37:38.212499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.796 [2024-11-17 01:37:38.212514] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:29.796 [2024-11-17 01:37:38.212526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:17:29.796 [2024-11-17 01:37:38.212534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.796 [2024-11-17 01:37:38.238523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.796 [2024-11-17 01:37:38.238566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:29.796 [2024-11-17 01:37:38.238580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.965 ms 00:17:29.796 [2024-11-17 01:37:38.238588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.059 [2024-11-17 01:37:38.263829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.059 [2024-11-17 01:37:38.263872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:30.059 [2024-11-17 01:37:38.263887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.187 ms 00:17:30.059 [2024-11-17 01:37:38.263895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.059 [2024-11-17 01:37:38.289197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.059 [2024-11-17 01:37:38.289250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:30.059 [2024-11-17 01:37:38.289264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.246 ms 00:17:30.059 [2024-11-17 01:37:38.289272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.059 [2024-11-17 01:37:38.314345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.059 [2024-11-17 01:37:38.314390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:30.059 [2024-11-17 01:37:38.314404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.962 ms 00:17:30.059 [2024-11-17 01:37:38.314412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.059 [2024-11-17 01:37:38.314461] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:30.059 [2024-11-17 01:37:38.314477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:30.059 [2024-11-17 01:37:38.314490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:30.059 [2024-11-17 01:37:38.314498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:30.059 [2024-11-17 01:37:38.314508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:30.059 [2024-11-17 01:37:38.314516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:30.059 [2024-11-17 01:37:38.314526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314565] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 
[2024-11-17 01:37:38.314829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.314995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:30.060 [2024-11-17 01:37:38.315085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:30.060 [2024-11-17 01:37:38.315306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:30.061 [2024-11-17 01:37:38.315469] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:30.061 [2024-11-17 01:37:38.315481] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aec450e1-c0d4-4563-8d98-0437163278cc 00:17:30.061 [2024-11-17 01:37:38.315489] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:30.061 [2024-11-17 01:37:38.315501] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:30.061 [2024-11-17 01:37:38.315508] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:30.061 [2024-11-17 01:37:38.315521] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:30.061 [2024-11-17 01:37:38.315528] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:30.061 [2024-11-17 01:37:38.315539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:30.061 [2024-11-17 01:37:38.315548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:30.061 [2024-11-17 01:37:38.315556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:30.061 [2024-11-17 01:37:38.315563] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:30.061 [2024-11-17 01:37:38.315572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.061 [2024-11-17 01:37:38.315579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:30.061 [2024-11-17 01:37:38.315589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:17:30.061 [2024-11-17 01:37:38.315598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.061 [2024-11-17 01:37:38.329466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.061 [2024-11-17 01:37:38.329652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:30.061 [2024-11-17 01:37:38.329678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.819 ms 00:17:30.061 [2024-11-17 01:37:38.329687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.061 [2024-11-17 01:37:38.330141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.061 [2024-11-17 01:37:38.330163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:30.061 [2024-11-17 01:37:38.330175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:17:30.061 [2024-11-17 01:37:38.330186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.061 [2024-11-17 01:37:38.376510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.061 [2024-11-17 01:37:38.376695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.061 [2024-11-17 01:37:38.376721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.061 [2024-11-17 01:37:38.376730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.061 [2024-11-17 01:37:38.376824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.061 [2024-11-17 01:37:38.376835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.061 [2024-11-17 01:37:38.376847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.061 [2024-11-17 01:37:38.376858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.061 [2024-11-17 01:37:38.376945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.061 [2024-11-17 01:37:38.376959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.061 [2024-11-17 01:37:38.376971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.061 [2024-11-17 01:37:38.376980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.061 [2024-11-17 01:37:38.377005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.061 [2024-11-17 01:37:38.377015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.061 [2024-11-17 01:37:38.377026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.061 [2024-11-17 01:37:38.377035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.061 [2024-11-17 01:37:38.460219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.061 [2024-11-17 01:37:38.460272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.061 [2024-11-17 01:37:38.460289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:30.061 [2024-11-17 01:37:38.460297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.323 [2024-11-17 01:37:38.528131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.323 [2024-11-17 01:37:38.528181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.323 [2024-11-17 01:37:38.528196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.323 [2024-11-17 01:37:38.528208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.323 [2024-11-17 01:37:38.528319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.323 [2024-11-17 01:37:38.528330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.323 [2024-11-17 01:37:38.528342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.323 [2024-11-17 01:37:38.528349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.323 [2024-11-17 01:37:38.528403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.323 [2024-11-17 01:37:38.528416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.323 [2024-11-17 01:37:38.528427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.323 [2024-11-17 01:37:38.528436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.323 [2024-11-17 01:37:38.528539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.323 [2024-11-17 01:37:38.528551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.323 [2024-11-17 01:37:38.528562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.323 [2024-11-17 01:37:38.528570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.323 [2024-11-17 01:37:38.528608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.323 [2024-11-17 01:37:38.528618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:30.323 [2024-11-17 01:37:38.528628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.323 [2024-11-17 01:37:38.528636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.323 [2024-11-17 01:37:38.528683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.323 [2024-11-17 01:37:38.528697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.323 [2024-11-17 01:37:38.528707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.323 [2024-11-17 01:37:38.528715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.323 [2024-11-17 01:37:38.528769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.323 [2024-11-17 01:37:38.528781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.323 [2024-11-17 01:37:38.529099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.323 [2024-11-17 01:37:38.529146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.323 [2024-11-17 01:37:38.529337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.304 ms, result 0 00:17:30.323 true 00:17:30.323 01:37:38 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74536 
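With the FTL shutdown done in 375.304 ms (versus 3604.699 ms for startup), the harness tears down the test app. The killprocess trace that follows checks that the pid is set, confirms the process is alive, resolves its command name (reactor_0, the SPDK reactor), then kills and reaps it. A condensed sketch of that pattern, not the verbatim autotest helper:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1                 # the '[' -z ... ']' guard seen in the trace
      kill -0 "$pid" 2>/dev/null || return 1    # is the process still alive?
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                # terminate, then reap the exit status
  }

Note that wait only reaps children of the calling shell, which is why the helper runs in the shell that launched the app, as the trace's wait 74536 shows.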
00:17:30.323 01:37:38 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74536 ']' 00:17:30.323 01:37:38 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74536 00:17:30.323 01:37:38 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:17:30.323 01:37:38 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.323 01:37:38 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74536 00:17:30.323 killing process with pid 74536 00:17:30.323 01:37:38 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.323 01:37:38 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.323 01:37:38 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74536' 00:17:30.323 01:37:38 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 74536 00:17:30.323 01:37:38 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 74536 00:17:34.526 01:37:42 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:38.734 262144+0 records in 00:17:38.734 262144+0 records out 00:17:38.734 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.93124 s, 273 MB/s 00:17:38.734 01:37:46 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:40.120 01:37:48 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:40.120 [2024-11-17 01:37:48.529871] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:40.120 [2024-11-17 01:37:48.530688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74766 ] 00:17:40.381 [2024-11-17 01:37:48.690943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.381 [2024-11-17 01:37:48.792034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.642 [2024-11-17 01:37:49.076173] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.642 [2024-11-17 01:37:49.076252] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.904 [2024-11-17 01:37:49.237653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.237713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:40.904 [2024-11-17 01:37:49.237734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.904 [2024-11-17 01:37:49.237743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.237824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.237836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.904 [2024-11-17 01:37:49.237849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:40.904 [2024-11-17 01:37:49.237857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.237878] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:17:40.904 [2024-11-17 01:37:49.238584] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:40.904 [2024-11-17 01:37:49.238615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.238626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.904 [2024-11-17 01:37:49.238636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:17:40.904 [2024-11-17 01:37:49.238644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.240492] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:40.904 [2024-11-17 01:37:49.254631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.254675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:40.904 [2024-11-17 01:37:49.254689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.142 ms 00:17:40.904 [2024-11-17 01:37:49.254697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.254777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.254808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:40.904 [2024-11-17 01:37:49.254820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:40.904 [2024-11-17 01:37:49.254828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.262744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.262805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.904 [2024-11-17 01:37:49.262817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.839 ms 00:17:40.904 [2024-11-17 01:37:49.262825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.262909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.262920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.904 [2024-11-17 01:37:49.262930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:40.904 [2024-11-17 01:37:49.262938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.262982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.262992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.904 [2024-11-17 01:37:49.263000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:40.904 [2024-11-17 01:37:49.263009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.263031] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:40.904 [2024-11-17 01:37:49.266980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.267017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.904 [2024-11-17 01:37:49.267027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.954 ms 00:17:40.904 [2024-11-17 01:37:49.267038] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.267072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.267081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.904 [2024-11-17 01:37:49.267090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:40.904 [2024-11-17 01:37:49.267098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.267148] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:40.904 [2024-11-17 01:37:49.267172] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:40.904 [2024-11-17 01:37:49.267210] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:40.904 [2024-11-17 01:37:49.267230] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:40.904 [2024-11-17 01:37:49.267337] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:40.904 [2024-11-17 01:37:49.267350] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.904 [2024-11-17 01:37:49.267361] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:40.904 [2024-11-17 01:37:49.267372] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.904 [2024-11-17 01:37:49.267382] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.904 [2024-11-17 01:37:49.267390] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:40.904 [2024-11-17 01:37:49.267398] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.904 [2024-11-17 01:37:49.267406] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:40.904 [2024-11-17 01:37:49.267415] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:40.904 [2024-11-17 01:37:49.267426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.267434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.904 [2024-11-17 01:37:49.267444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:40.904 [2024-11-17 01:37:49.267451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.267534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.904 [2024-11-17 01:37:49.267544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.904 [2024-11-17 01:37:49.267552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:40.904 [2024-11-17 01:37:49.267559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.904 [2024-11-17 01:37:49.267677] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.904 [2024-11-17 01:37:49.267691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.904 [2024-11-17 01:37:49.267700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:17:40.904 [2024-11-17 01:37:49.267708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.904 [2024-11-17 01:37:49.267716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.904 [2024-11-17 01:37:49.267723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.904 [2024-11-17 01:37:49.267731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:40.904 [2024-11-17 01:37:49.267740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.904 [2024-11-17 01:37:49.267747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:40.904 [2024-11-17 01:37:49.267754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.904 [2024-11-17 01:37:49.267761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.904 [2024-11-17 01:37:49.267768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:40.904 [2024-11-17 01:37:49.267774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.904 [2024-11-17 01:37:49.267783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.904 [2024-11-17 01:37:49.267813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:40.904 [2024-11-17 01:37:49.267828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.904 [2024-11-17 01:37:49.267835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:40.904 [2024-11-17 01:37:49.267842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:40.904 [2024-11-17 01:37:49.267849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.904 [2024-11-17 01:37:49.267856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.904 [2024-11-17 01:37:49.267863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:40.904 [2024-11-17 01:37:49.267871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.904 [2024-11-17 01:37:49.267878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.904 [2024-11-17 01:37:49.267885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:40.904 [2024-11-17 01:37:49.267892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.904 [2024-11-17 01:37:49.267899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.904 [2024-11-17 01:37:49.267906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:40.904 [2024-11-17 01:37:49.267913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.904 [2024-11-17 01:37:49.267920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.904 [2024-11-17 01:37:49.267927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:40.904 [2024-11-17 01:37:49.267935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.904 [2024-11-17 01:37:49.267943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.904 [2024-11-17 01:37:49.267950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:40.904 [2024-11-17 01:37:49.267956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.904 [2024-11-17 01:37:49.267963] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:17:40.904 [2024-11-17 01:37:49.267969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:40.904 [2024-11-17 01:37:49.267976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.904 [2024-11-17 01:37:49.267983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:40.904 [2024-11-17 01:37:49.267990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:40.904 [2024-11-17 01:37:49.267997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.904 [2024-11-17 01:37:49.268004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:40.904 [2024-11-17 01:37:49.268010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:40.904 [2024-11-17 01:37:49.268018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.905 [2024-11-17 01:37:49.268025] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.905 [2024-11-17 01:37:49.268033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.905 [2024-11-17 01:37:49.268042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.905 [2024-11-17 01:37:49.268049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.905 [2024-11-17 01:37:49.268057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.905 [2024-11-17 01:37:49.268064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.905 [2024-11-17 01:37:49.268070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.905 [2024-11-17 01:37:49.268077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.905 [2024-11-17 01:37:49.268083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.905 [2024-11-17 01:37:49.268090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.905 [2024-11-17 01:37:49.268098] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.905 [2024-11-17 01:37:49.268109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.905 [2024-11-17 01:37:49.268118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:40.905 [2024-11-17 01:37:49.268126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:40.905 [2024-11-17 01:37:49.268143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:40.905 [2024-11-17 01:37:49.268151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:40.905 [2024-11-17 01:37:49.268159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:40.905 [2024-11-17 01:37:49.268166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:40.905 [2024-11-17 01:37:49.268173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:40.905 [2024-11-17 01:37:49.268182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:40.905 [2024-11-17 01:37:49.268189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:40.905 [2024-11-17 01:37:49.268196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:40.905 [2024-11-17 01:37:49.268203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:40.905 [2024-11-17 01:37:49.268210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:40.905 [2024-11-17 01:37:49.268217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:40.905 [2024-11-17 01:37:49.268225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:40.905 [2024-11-17 01:37:49.268231] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.905 [2024-11-17 01:37:49.268243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.905 [2024-11-17 01:37:49.268252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.905 [2024-11-17 01:37:49.268261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.905 [2024-11-17 01:37:49.268269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.905 [2024-11-17 01:37:49.268277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.905 [2024-11-17 01:37:49.268285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.905 [2024-11-17 01:37:49.268293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.905 [2024-11-17 01:37:49.268301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:17:40.905 [2024-11-17 01:37:49.268309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.905 [2024-11-17 01:37:49.300521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.905 [2024-11-17 01:37:49.300573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.905 [2024-11-17 01:37:49.300586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.167 ms 00:17:40.905 [2024-11-17 01:37:49.300596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.905 [2024-11-17 01:37:49.300688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.905 [2024-11-17 01:37:49.300698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.905 [2024-11-17 01:37:49.300708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.059 ms 00:17:40.905 [2024-11-17 01:37:49.300717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.905 [2024-11-17 01:37:49.343036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.905 [2024-11-17 01:37:49.343087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.905 [2024-11-17 01:37:49.343102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.260 ms 00:17:40.905 [2024-11-17 01:37:49.343112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.905 [2024-11-17 01:37:49.343161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.905 [2024-11-17 01:37:49.343171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.905 [2024-11-17 01:37:49.343181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:40.905 [2024-11-17 01:37:49.343193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.905 [2024-11-17 01:37:49.343783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.905 [2024-11-17 01:37:49.343843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.905 [2024-11-17 01:37:49.343855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:17:40.905 [2024-11-17 01:37:49.343864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.905 [2024-11-17 01:37:49.344043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.905 [2024-11-17 01:37:49.344055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.905 [2024-11-17 01:37:49.344064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:17:40.905 [2024-11-17 01:37:49.344081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.905 [2024-11-17 01:37:49.359756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.360046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.167 [2024-11-17 01:37:49.360071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.655 ms 00:17:41.167 [2024-11-17 01:37:49.360080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.374409] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:41.167 [2024-11-17 01:37:49.374459] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:41.167 [2024-11-17 01:37:49.374474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.374483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:41.167 [2024-11-17 01:37:49.374493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.285 ms 00:17:41.167 [2024-11-17 01:37:49.374501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.400609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.400659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:41.167 [2024-11-17 01:37:49.400679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.054 ms 00:17:41.167 [2024-11-17 01:37:49.400687] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.413693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.413749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:41.167 [2024-11-17 01:37:49.413761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.953 ms 00:17:41.167 [2024-11-17 01:37:49.413770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.426368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.426413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:41.167 [2024-11-17 01:37:49.426425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.381 ms 00:17:41.167 [2024-11-17 01:37:49.426432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.427096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.427124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:41.167 [2024-11-17 01:37:49.427135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:17:41.167 [2024-11-17 01:37:49.427145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.492317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.492373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:41.167 [2024-11-17 01:37:49.492387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.149 ms 00:17:41.167 [2024-11-17 01:37:49.492403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.504506] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:41.167 [2024-11-17 01:37:49.507747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.507814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.167 [2024-11-17 01:37:49.507827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.276 ms 00:17:41.167 [2024-11-17 01:37:49.507837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.507924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.507936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:41.167 [2024-11-17 01:37:49.507948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:41.167 [2024-11-17 01:37:49.507958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.508031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.508043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:41.167 [2024-11-17 01:37:49.508052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:41.167 [2024-11-17 01:37:49.508060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.508082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.508092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:17:41.167 [2024-11-17 01:37:49.508101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.167 [2024-11-17 01:37:49.508109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.508143] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:41.167 [2024-11-17 01:37:49.508156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.508167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:41.167 [2024-11-17 01:37:49.508176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:41.167 [2024-11-17 01:37:49.508184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.534052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.534217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.167 [2024-11-17 01:37:49.534284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.849 ms 00:17:41.167 [2024-11-17 01:37:49.534309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 01:37:49.534818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 01:37:49.534912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:41.168 [2024-11-17 01:37:49.535021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:41.168 [2024-11-17 01:37:49.535046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 01:37:49.536350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.240 ms, result 0 00:17:42.110  [2024-11-17T01:37:51.957Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-17T01:37:52.900Z] Copying: 35/1024 [MB] (25 MBps) [2024-11-17T01:37:53.880Z] Copying: 62/1024 [MB] (27 MBps) [2024-11-17T01:37:54.822Z] Copying: 91/1024 [MB] (28 MBps) [2024-11-17T01:37:55.765Z] Copying: 120/1024 [MB] (29 MBps) [2024-11-17T01:37:56.709Z] Copying: 148/1024 [MB] (27 MBps) [2024-11-17T01:37:57.653Z] Copying: 176/1024 [MB] (28 MBps) [2024-11-17T01:37:58.598Z] Copying: 214/1024 [MB] (37 MBps) [2024-11-17T01:37:59.983Z] Copying: 244/1024 [MB] (30 MBps) [2024-11-17T01:38:00.556Z] Copying: 268/1024 [MB] (24 MBps) [2024-11-17T01:38:01.942Z] Copying: 291/1024 [MB] (22 MBps) [2024-11-17T01:38:02.884Z] Copying: 321/1024 [MB] (30 MBps) [2024-11-17T01:38:03.826Z] Copying: 352/1024 [MB] (30 MBps) [2024-11-17T01:38:04.769Z] Copying: 379/1024 [MB] (26 MBps) [2024-11-17T01:38:05.710Z] Copying: 408/1024 [MB] (29 MBps) [2024-11-17T01:38:06.654Z] Copying: 438/1024 [MB] (29 MBps) [2024-11-17T01:38:07.598Z] Copying: 469/1024 [MB] (30 MBps) [2024-11-17T01:38:08.986Z] Copying: 481/1024 [MB] (12 MBps) [2024-11-17T01:38:09.561Z] Copying: 499/1024 [MB] (18 MBps) [2024-11-17T01:38:10.950Z] Copying: 511/1024 [MB] (11 MBps) [2024-11-17T01:38:11.895Z] Copying: 523/1024 [MB] (12 MBps) [2024-11-17T01:38:12.836Z] Copying: 549/1024 [MB] (25 MBps) [2024-11-17T01:38:13.780Z] Copying: 575/1024 [MB] (26 MBps) [2024-11-17T01:38:14.724Z] Copying: 603/1024 [MB] (27 MBps) [2024-11-17T01:38:15.669Z] Copying: 630/1024 [MB] (27 MBps) [2024-11-17T01:38:16.612Z] Copying: 658/1024 [MB] (28 MBps) [2024-11-17T01:38:17.557Z] Copying: 685/1024 [MB] (26 
MBps) [2024-11-17T01:38:18.944Z] Copying: 731/1024 [MB] (45 MBps) [2024-11-17T01:38:19.886Z] Copying: 759/1024 [MB] (27 MBps) [2024-11-17T01:38:20.831Z] Copying: 788/1024 [MB] (28 MBps) [2024-11-17T01:38:21.776Z] Copying: 802/1024 [MB] (13 MBps) [2024-11-17T01:38:22.720Z] Copying: 819/1024 [MB] (17 MBps) [2024-11-17T01:38:23.662Z] Copying: 829/1024 [MB] (10 MBps) [2024-11-17T01:38:24.606Z] Copying: 853/1024 [MB] (24 MBps) [2024-11-17T01:38:25.991Z] Copying: 899/1024 [MB] (46 MBps) [2024-11-17T01:38:26.565Z] Copying: 927/1024 [MB] (27 MBps) [2024-11-17T01:38:27.952Z] Copying: 954/1024 [MB] (26 MBps) [2024-11-17T01:38:28.896Z] Copying: 991/1024 [MB] (37 MBps) [2024-11-17T01:38:29.159Z] Copying: 1009/1024 [MB] (17 MBps) [2024-11-17T01:38:29.159Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-17 01:38:28.936389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:28.936488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:20.700 [2024-11-17 01:38:28.936549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:20.700 [2024-11-17 01:38:28.936569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:28.936620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:20.700 [2024-11-17 01:38:28.938772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:28.938886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:20.700 [2024-11-17 01:38:28.938940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:18:20.700 [2024-11-17 01:38:28.938958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:28.940286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:28.940372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:20.700 [2024-11-17 01:38:28.940420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:18:20.700 [2024-11-17 01:38:28.940463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:28.951522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:28.951625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:20.700 [2024-11-17 01:38:28.951685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.033 ms 00:18:20.700 [2024-11-17 01:38:28.951704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:28.956683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:28.956777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:20.700 [2024-11-17 01:38:28.956837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.939 ms 00:18:20.700 [2024-11-17 01:38:28.956847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:28.975482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:28.975508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:20.700 [2024-11-17 01:38:28.975516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.594 ms 00:18:20.700 [2024-11-17 
01:38:28.975523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:28.987716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:28.987744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:20.700 [2024-11-17 01:38:28.987753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.166 ms 00:18:20.700 [2024-11-17 01:38:28.987760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:28.987856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:28.987865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:20.700 [2024-11-17 01:38:28.987875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:20.700 [2024-11-17 01:38:28.987881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:29.006215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:29.006238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:20.700 [2024-11-17 01:38:29.006246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.323 ms 00:18:20.700 [2024-11-17 01:38:29.006252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:29.024601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:29.024625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:20.700 [2024-11-17 01:38:29.024639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.323 ms 00:18:20.700 [2024-11-17 01:38:29.024645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:29.042443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:29.042467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:20.700 [2024-11-17 01:38:29.042474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.772 ms 00:18:20.700 [2024-11-17 01:38:29.042480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:29.060035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.700 [2024-11-17 01:38:29.060131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:20.700 [2024-11-17 01:38:29.060144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.512 ms 00:18:20.700 [2024-11-17 01:38:29.060149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.700 [2024-11-17 01:38:29.060170] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:20.700 [2024-11-17 01:38:29.060181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060207] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:20.700 [2024-11-17 01:38:29.060279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 
01:38:29.060349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:18:20.701 [2024-11-17 01:38:29.060489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:20.701 [2024-11-17 01:38:29.060750] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:20.701 [2024-11-17 01:38:29.060759] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aec450e1-c0d4-4563-8d98-0437163278cc 00:18:20.701 [2024-11-17 01:38:29.060765] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:20.701 [2024-11-17 01:38:29.060772] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:20.701 [2024-11-17 
01:38:29.060777] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:20.701 [2024-11-17 01:38:29.060783] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:20.702 [2024-11-17 01:38:29.060788] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:20.702 [2024-11-17 01:38:29.060811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:20.702 [2024-11-17 01:38:29.060816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:20.702 [2024-11-17 01:38:29.060825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:20.702 [2024-11-17 01:38:29.060831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:20.702 [2024-11-17 01:38:29.060837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.702 [2024-11-17 01:38:29.060842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:20.702 [2024-11-17 01:38:29.060848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:18:20.702 [2024-11-17 01:38:29.060854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.702 [2024-11-17 01:38:29.070303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.702 [2024-11-17 01:38:29.070328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:20.702 [2024-11-17 01:38:29.070335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.437 ms 00:18:20.702 [2024-11-17 01:38:29.070341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.702 [2024-11-17 01:38:29.070606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.702 [2024-11-17 01:38:29.070614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:20.702 [2024-11-17 01:38:29.070620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:20.702 [2024-11-17 01:38:29.070625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.702 [2024-11-17 01:38:29.096363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.702 [2024-11-17 01:38:29.096390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:20.702 [2024-11-17 01:38:29.096397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.702 [2024-11-17 01:38:29.096403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.702 [2024-11-17 01:38:29.096439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.702 [2024-11-17 01:38:29.096446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:20.702 [2024-11-17 01:38:29.096452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.702 [2024-11-17 01:38:29.096458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.702 [2024-11-17 01:38:29.096503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.702 [2024-11-17 01:38:29.096510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:20.702 [2024-11-17 01:38:29.096516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.702 [2024-11-17 01:38:29.096522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.702 [2024-11-17 01:38:29.096532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:20.702 [2024-11-17 01:38:29.096538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:20.702 [2024-11-17 01:38:29.096544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.702 [2024-11-17 01:38:29.096550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.702 [2024-11-17 01:38:29.155363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.961 [2024-11-17 01:38:29.155504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:20.961 [2024-11-17 01:38:29.155517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.961 [2024-11-17 01:38:29.155529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.961 [2024-11-17 01:38:29.203391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.961 [2024-11-17 01:38:29.203422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:20.961 [2024-11-17 01:38:29.203430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.961 [2024-11-17 01:38:29.203436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.961 [2024-11-17 01:38:29.203488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.961 [2024-11-17 01:38:29.203498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:20.961 [2024-11-17 01:38:29.203505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.961 [2024-11-17 01:38:29.203510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.961 [2024-11-17 01:38:29.203536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.961 [2024-11-17 01:38:29.203543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:20.961 [2024-11-17 01:38:29.203549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.961 [2024-11-17 01:38:29.203554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.961 [2024-11-17 01:38:29.203627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.961 [2024-11-17 01:38:29.203637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:20.961 [2024-11-17 01:38:29.203643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.961 [2024-11-17 01:38:29.203649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.961 [2024-11-17 01:38:29.203671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.961 [2024-11-17 01:38:29.203678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:20.961 [2024-11-17 01:38:29.203685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.961 [2024-11-17 01:38:29.203690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.961 [2024-11-17 01:38:29.203717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.961 [2024-11-17 01:38:29.203724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:20.961 [2024-11-17 01:38:29.203732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.961 [2024-11-17 01:38:29.203738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.961 
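Every management step in these traces is emitted as an Action/Rollback line followed by name, duration, and status lines from mngt/ftl_mngt.c:427-431. Given one console line per entry (this capture has fused several entries per line), the per-step durations can be pulled out with a small sed/paste pipeline; "build.log" is a stand-in file name:

    # List each FTL management step next to its duration.
    # Relies on the name: and duration: lines alternating, as trace_step emits them.
    sed -n -e 's/.*\[FTL\]\[ftl0\] name: //p' \
           -e 's/.*\[FTL\]\[ftl0\] duration: //p' build.log \
      | paste -d ' ' - -

On this log that yields lines such as "Restore P2L checkpoints 65.149 ms", which makes the slow startup steps easy to spot.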
[2024-11-17 01:38:29.203769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.961 [2024-11-17 01:38:29.203776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:20.961 [2024-11-17 01:38:29.203782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.961 [2024-11-17 01:38:29.203809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.961 [2024-11-17 01:38:29.203898] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 267.486 ms, result 0 00:18:21.986 00:18:21.986 00:18:21.986 01:38:30 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:21.986 [2024-11-17 01:38:30.124657] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:21.986 [2024-11-17 01:38:30.124777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75200 ] 00:18:21.986 [2024-11-17 01:38:30.283811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.986 [2024-11-17 01:38:30.401128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.252 [2024-11-17 01:38:30.689000] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.252 [2024-11-17 01:38:30.689077] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.514 [2024-11-17 01:38:30.851347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.851408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.514 [2024-11-17 01:38:30.851427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:22.514 [2024-11-17 01:38:30.851435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.851490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.851501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.514 [2024-11-17 01:38:30.851512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:22.514 [2024-11-17 01:38:30.851520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.851541] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.514 [2024-11-17 01:38:30.852307] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.514 [2024-11-17 01:38:30.852332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.852341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.514 [2024-11-17 01:38:30.852350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:18:22.514 [2024-11-17 01:38:30.852358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.854097] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 
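The shutdown trace above closes ftl0 after the write pass; the spdk_dd instance now starting (restore.sh@74) re-opens the same FTL bdev and reads the 262144 blocks back out. Condensed, the round trip this log traces looks like the sketch below; the paths come from the traced commands, while the final checksum comparison is an assumption about the step that follows this capture:

    # Round-trip sketch of restore.sh@69-74 as traced in this log.
    SPDK=/home/vagrant/spdk_repo/spdk
    TESTFILE=$SPDK/test/ftl/testfile
    FTL_JSON=$SPDK/test/ftl/config/ftl.json

    dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K     # 1 GiB of random data
    md5_before=$(md5sum "$TESTFILE")

    # push the file into the FTL bdev, then pull the same 262144 blocks back
    "$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$TESTFILE" --json="$FTL_JSON" --count=262144

    md5_after=$(md5sum "$TESTFILE")
    [[ ${md5_before%% *} == "${md5_after%% *}" ]] || echo "restore mismatch" >&2

The startup trace around this point also repeats layout numbers worth a quick cross-check: 20971520 L2P entries at an address size of 4 bytes is 83886080 bytes, exactly the 80.00 MiB reported for the l2p region.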
00:18:22.514 [2024-11-17 01:38:30.868600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.868649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:22.514 [2024-11-17 01:38:30.868663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.504 ms 00:18:22.514 [2024-11-17 01:38:30.868672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.868753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.868764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:22.514 [2024-11-17 01:38:30.868772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:22.514 [2024-11-17 01:38:30.868780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.876991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.877033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.514 [2024-11-17 01:38:30.877044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.107 ms 00:18:22.514 [2024-11-17 01:38:30.877052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.877139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.877148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.514 [2024-11-17 01:38:30.877156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:22.514 [2024-11-17 01:38:30.877164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.877210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.877221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.514 [2024-11-17 01:38:30.877231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:22.514 [2024-11-17 01:38:30.877240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.877263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.514 [2024-11-17 01:38:30.881299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.881339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.514 [2024-11-17 01:38:30.881351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.041 ms 00:18:22.514 [2024-11-17 01:38:30.881363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.881398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.881407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.514 [2024-11-17 01:38:30.881416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:22.514 [2024-11-17 01:38:30.881424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.881477] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:22.514 [2024-11-17 01:38:30.881501] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob load 0x150 bytes 00:18:22.514 [2024-11-17 01:38:30.881539] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:22.514 [2024-11-17 01:38:30.881561] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:22.514 [2024-11-17 01:38:30.881668] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.514 [2024-11-17 01:38:30.881682] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.514 [2024-11-17 01:38:30.881694] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:22.514 [2024-11-17 01:38:30.881707] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.514 [2024-11-17 01:38:30.881717] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.514 [2024-11-17 01:38:30.881726] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:22.514 [2024-11-17 01:38:30.881734] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.514 [2024-11-17 01:38:30.881743] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.514 [2024-11-17 01:38:30.881753] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.514 [2024-11-17 01:38:30.881766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.881774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.514 [2024-11-17 01:38:30.881782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:22.514 [2024-11-17 01:38:30.881817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.881902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.514 [2024-11-17 01:38:30.881914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.514 [2024-11-17 01:38:30.881923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:22.514 [2024-11-17 01:38:30.881931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.514 [2024-11-17 01:38:30.882036] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.514 [2024-11-17 01:38:30.882050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.514 [2024-11-17 01:38:30.882062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.514 [2024-11-17 01:38:30.882073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.514 [2024-11-17 01:38:30.882081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.514 [2024-11-17 01:38:30.882089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.514 [2024-11-17 01:38:30.882096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:22.514 [2024-11-17 01:38:30.882105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.514 [2024-11-17 01:38:30.882114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:22.514 [2024-11-17 01:38:30.882130] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:18:22.514 [2024-11-17 01:38:30.882137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.515 [2024-11-17 01:38:30.882144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:22.515 [2024-11-17 01:38:30.882150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.515 [2024-11-17 01:38:30.882157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.515 [2024-11-17 01:38:30.882167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:22.515 [2024-11-17 01:38:30.882182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.515 [2024-11-17 01:38:30.882193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.515 [2024-11-17 01:38:30.882201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:22.515 [2024-11-17 01:38:30.882208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.515 [2024-11-17 01:38:30.882215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.515 [2024-11-17 01:38:30.882223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:22.515 [2024-11-17 01:38:30.882230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.515 [2024-11-17 01:38:30.882237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.515 [2024-11-17 01:38:30.882244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:22.515 [2024-11-17 01:38:30.882250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.515 [2024-11-17 01:38:30.882257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.515 [2024-11-17 01:38:30.882264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:22.515 [2024-11-17 01:38:30.882272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.515 [2024-11-17 01:38:30.882280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.515 [2024-11-17 01:38:30.882287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:22.515 [2024-11-17 01:38:30.882293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.515 [2024-11-17 01:38:30.882299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.515 [2024-11-17 01:38:30.882306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:22.515 [2024-11-17 01:38:30.882313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.515 [2024-11-17 01:38:30.882320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.515 [2024-11-17 01:38:30.882328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:22.515 [2024-11-17 01:38:30.882335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.515 [2024-11-17 01:38:30.882341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.515 [2024-11-17 01:38:30.882348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:22.515 [2024-11-17 01:38:30.882355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.515 [2024-11-17 01:38:30.882362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.515 [2024-11-17 01:38:30.882370] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:22.515 [2024-11-17 01:38:30.882377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.515 [2024-11-17 01:38:30.882384] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.515 [2024-11-17 01:38:30.882392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.515 [2024-11-17 01:38:30.882399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.515 [2024-11-17 01:38:30.882406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.515 [2024-11-17 01:38:30.882415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.515 [2024-11-17 01:38:30.882424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.515 [2024-11-17 01:38:30.882431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.515 [2024-11-17 01:38:30.882438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.515 [2024-11-17 01:38:30.882445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.515 [2024-11-17 01:38:30.882452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.515 [2024-11-17 01:38:30.882461] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.515 [2024-11-17 01:38:30.882471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.515 [2024-11-17 01:38:30.882480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:22.515 [2024-11-17 01:38:30.882488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:22.515 [2024-11-17 01:38:30.882495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:22.515 [2024-11-17 01:38:30.882502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:22.515 [2024-11-17 01:38:30.882509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:22.515 [2024-11-17 01:38:30.882518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:22.515 [2024-11-17 01:38:30.882526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:22.515 [2024-11-17 01:38:30.882533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:22.515 [2024-11-17 01:38:30.882539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:22.515 [2024-11-17 01:38:30.882546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:22.515 [2024-11-17 01:38:30.882554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 
00:18:22.515 [2024-11-17 01:38:30.882562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:22.515 [2024-11-17 01:38:30.882569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:22.515 [2024-11-17 01:38:30.882576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:22.515 [2024-11-17 01:38:30.882583] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.515 [2024-11-17 01:38:30.882593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.515 [2024-11-17 01:38:30.882602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.515 [2024-11-17 01:38:30.882611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.515 [2024-11-17 01:38:30.882617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.515 [2024-11-17 01:38:30.882625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.515 [2024-11-17 01:38:30.882632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.515 [2024-11-17 01:38:30.882641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.515 [2024-11-17 01:38:30.882650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:18:22.515 [2024-11-17 01:38:30.882657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.515 [2024-11-17 01:38:30.914662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.515 [2024-11-17 01:38:30.914713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.515 [2024-11-17 01:38:30.914726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.961 ms 00:18:22.515 [2024-11-17 01:38:30.914734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.515 [2024-11-17 01:38:30.914854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.515 [2024-11-17 01:38:30.914864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.515 [2024-11-17 01:38:30.914873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:22.515 [2024-11-17 01:38:30.914881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.515 [2024-11-17 01:38:30.959004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.515 [2024-11-17 01:38:30.959058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.515 [2024-11-17 01:38:30.959073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.062 ms 00:18:22.515 [2024-11-17 01:38:30.959082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.515 [2024-11-17 01:38:30.959131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.515 [2024-11-17 01:38:30.959142] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.515 [2024-11-17 01:38:30.959152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.515 [2024-11-17 01:38:30.959164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.515 [2024-11-17 01:38:30.959744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.515 [2024-11-17 01:38:30.959769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.515 [2024-11-17 01:38:30.959780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:18:22.515 [2024-11-17 01:38:30.959825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.515 [2024-11-17 01:38:30.960008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.515 [2024-11-17 01:38:30.960021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.515 [2024-11-17 01:38:30.960030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:18:22.515 [2024-11-17 01:38:30.960045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.777 [2024-11-17 01:38:30.975811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.777 [2024-11-17 01:38:30.975854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.777 [2024-11-17 01:38:30.975869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.743 ms 00:18:22.777 [2024-11-17 01:38:30.975878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.777 [2024-11-17 01:38:30.990342] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:22.777 [2024-11-17 01:38:30.990392] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.777 [2024-11-17 01:38:30.990406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:30.990416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.778 [2024-11-17 01:38:30.990427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.425 ms 00:18:22.778 [2024-11-17 01:38:30.990435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.016470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.016542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.778 [2024-11-17 01:38:31.016555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.973 ms 00:18:22.778 [2024-11-17 01:38:31.016564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.029781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.030049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.778 [2024-11-17 01:38:31.030073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.160 ms 00:18:22.778 [2024-11-17 01:38:31.030083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.043116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.043166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:22.778 
[2024-11-17 01:38:31.043179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.856 ms 00:18:22.778 [2024-11-17 01:38:31.043186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.043868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.043898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.778 [2024-11-17 01:38:31.043909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:18:22.778 [2024-11-17 01:38:31.043920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.110357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.110417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.778 [2024-11-17 01:38:31.110441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.416 ms 00:18:22.778 [2024-11-17 01:38:31.110449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.122046] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:22.778 [2024-11-17 01:38:31.125470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.125675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.778 [2024-11-17 01:38:31.125696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.961 ms 00:18:22.778 [2024-11-17 01:38:31.125706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.125822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.125835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.778 [2024-11-17 01:38:31.125848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:22.778 [2024-11-17 01:38:31.125859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.125933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.125946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.778 [2024-11-17 01:38:31.125956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:22.778 [2024-11-17 01:38:31.125965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.125992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.126002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:22.778 [2024-11-17 01:38:31.126011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:22.778 [2024-11-17 01:38:31.126019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.126053] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.778 [2024-11-17 01:38:31.126069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.126078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.778 [2024-11-17 01:38:31.126087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:22.778 
[2024-11-17 01:38:31.126097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.152492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.152689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.778 [2024-11-17 01:38:31.152713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.371 ms 00:18:22.778 [2024-11-17 01:38:31.152730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.152843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.778 [2024-11-17 01:38:31.152856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:22.778 [2024-11-17 01:38:31.152867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:22.778 [2024-11-17 01:38:31.152876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.778 [2024-11-17 01:38:31.154122] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.265 ms, result 0 00:18:24.166  [2024-11-17T01:38:33.569Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-17T01:38:34.512Z] Copying: 37/1024 [MB] (18 MBps) [2024-11-17T01:38:35.456Z] Copying: 51/1024 [MB] (14 MBps) [2024-11-17T01:38:36.401Z] Copying: 66/1024 [MB] (14 MBps) [2024-11-17T01:38:37.345Z] Copying: 79/1024 [MB] (13 MBps) [2024-11-17T01:38:38.731Z] Copying: 91/1024 [MB] (11 MBps) [2024-11-17T01:38:39.685Z] Copying: 103/1024 [MB] (11 MBps) [2024-11-17T01:38:40.628Z] Copying: 118/1024 [MB] (15 MBps) [2024-11-17T01:38:41.569Z] Copying: 129/1024 [MB] (10 MBps) [2024-11-17T01:38:42.512Z] Copying: 143/1024 [MB] (13 MBps) [2024-11-17T01:38:43.455Z] Copying: 159/1024 [MB] (15 MBps) [2024-11-17T01:38:44.398Z] Copying: 170/1024 [MB] (10 MBps) [2024-11-17T01:38:45.350Z] Copying: 181/1024 [MB] (10 MBps) [2024-11-17T01:38:46.736Z] Copying: 191/1024 [MB] (10 MBps) [2024-11-17T01:38:47.680Z] Copying: 202/1024 [MB] (10 MBps) [2024-11-17T01:38:48.617Z] Copying: 216/1024 [MB] (14 MBps) [2024-11-17T01:38:49.555Z] Copying: 241/1024 [MB] (25 MBps) [2024-11-17T01:38:50.492Z] Copying: 273/1024 [MB] (31 MBps) [2024-11-17T01:38:51.433Z] Copying: 295/1024 [MB] (21 MBps) [2024-11-17T01:38:52.377Z] Copying: 309/1024 [MB] (14 MBps) [2024-11-17T01:38:53.765Z] Copying: 325/1024 [MB] (16 MBps) [2024-11-17T01:38:54.709Z] Copying: 342/1024 [MB] (16 MBps) [2024-11-17T01:38:55.655Z] Copying: 357/1024 [MB] (14 MBps) [2024-11-17T01:38:56.600Z] Copying: 370/1024 [MB] (13 MBps) [2024-11-17T01:38:57.544Z] Copying: 388/1024 [MB] (18 MBps) [2024-11-17T01:38:58.487Z] Copying: 407/1024 [MB] (18 MBps) [2024-11-17T01:38:59.428Z] Copying: 418/1024 [MB] (10 MBps) [2024-11-17T01:39:00.371Z] Copying: 433/1024 [MB] (15 MBps) [2024-11-17T01:39:01.757Z] Copying: 449/1024 [MB] (16 MBps) [2024-11-17T01:39:02.700Z] Copying: 472/1024 [MB] (22 MBps) [2024-11-17T01:39:03.731Z] Copying: 487/1024 [MB] (14 MBps) [2024-11-17T01:39:04.682Z] Copying: 504/1024 [MB] (16 MBps) [2024-11-17T01:39:05.626Z] Copying: 523/1024 [MB] (19 MBps) [2024-11-17T01:39:06.569Z] Copying: 538/1024 [MB] (14 MBps) [2024-11-17T01:39:07.512Z] Copying: 553/1024 [MB] (14 MBps) [2024-11-17T01:39:08.457Z] Copying: 563/1024 [MB] (10 MBps) [2024-11-17T01:39:09.398Z] Copying: 575/1024 [MB] (11 MBps) [2024-11-17T01:39:10.343Z] Copying: 585/1024 [MB] (10 MBps) [2024-11-17T01:39:11.736Z] Copying: 596/1024 [MB] (10 MBps) [2024-11-17T01:39:12.679Z] 
Copying: 606/1024 [MB] (10 MBps) [2024-11-17T01:39:13.622Z] Copying: 617/1024 [MB] (10 MBps) [2024-11-17T01:39:14.566Z] Copying: 628/1024 [MB] (10 MBps) [2024-11-17T01:39:15.509Z] Copying: 638/1024 [MB] (10 MBps) [2024-11-17T01:39:16.452Z] Copying: 649/1024 [MB] (10 MBps) [2024-11-17T01:39:17.395Z] Copying: 660/1024 [MB] (10 MBps) [2024-11-17T01:39:18.355Z] Copying: 680/1024 [MB] (20 MBps) [2024-11-17T01:39:19.739Z] Copying: 692/1024 [MB] (11 MBps) [2024-11-17T01:39:20.682Z] Copying: 713/1024 [MB] (21 MBps) [2024-11-17T01:39:21.625Z] Copying: 730/1024 [MB] (16 MBps) [2024-11-17T01:39:22.569Z] Copying: 746/1024 [MB] (16 MBps) [2024-11-17T01:39:23.513Z] Copying: 758/1024 [MB] (11 MBps) [2024-11-17T01:39:24.458Z] Copying: 777/1024 [MB] (19 MBps) [2024-11-17T01:39:25.401Z] Copying: 792/1024 [MB] (14 MBps) [2024-11-17T01:39:26.346Z] Copying: 809/1024 [MB] (16 MBps) [2024-11-17T01:39:27.733Z] Copying: 822/1024 [MB] (13 MBps) [2024-11-17T01:39:28.676Z] Copying: 842/1024 [MB] (19 MBps) [2024-11-17T01:39:29.620Z] Copying: 857/1024 [MB] (15 MBps) [2024-11-17T01:39:30.563Z] Copying: 887/1024 [MB] (29 MBps) [2024-11-17T01:39:31.507Z] Copying: 903/1024 [MB] (15 MBps) [2024-11-17T01:39:32.450Z] Copying: 915/1024 [MB] (11 MBps) [2024-11-17T01:39:33.393Z] Copying: 926/1024 [MB] (11 MBps) [2024-11-17T01:39:34.778Z] Copying: 938/1024 [MB] (11 MBps) [2024-11-17T01:39:35.350Z] Copying: 951/1024 [MB] (12 MBps) [2024-11-17T01:39:36.737Z] Copying: 967/1024 [MB] (16 MBps) [2024-11-17T01:39:37.681Z] Copying: 983/1024 [MB] (15 MBps) [2024-11-17T01:39:38.629Z] Copying: 1000/1024 [MB] (16 MBps) [2024-11-17T01:39:39.261Z] Copying: 1013/1024 [MB] (13 MBps) [2024-11-17T01:39:39.261Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-17 01:39:39.162663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.802 [2024-11-17 01:39:39.162751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:30.802 [2024-11-17 01:39:39.162770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:30.802 [2024-11-17 01:39:39.162780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.802 [2024-11-17 01:39:39.162825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.802 [2024-11-17 01:39:39.167490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.802 [2024-11-17 01:39:39.167561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:30.802 [2024-11-17 01:39:39.167588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.646 ms 00:19:30.802 [2024-11-17 01:39:39.167601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.802 [2024-11-17 01:39:39.167979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.802 [2024-11-17 01:39:39.167999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:30.802 [2024-11-17 01:39:39.168015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:19:30.802 [2024-11-17 01:39:39.168028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.802 [2024-11-17 01:39:39.173648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.802 [2024-11-17 01:39:39.173681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:30.802 [2024-11-17 01:39:39.173696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.599 ms 
00:19:30.802 [2024-11-17 01:39:39.173709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.802 [2024-11-17 01:39:39.181653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.802 [2024-11-17 01:39:39.181698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:30.802 [2024-11-17 01:39:39.181710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.914 ms 00:19:30.802 [2024-11-17 01:39:39.181720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.802 [2024-11-17 01:39:39.210276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.802 [2024-11-17 01:39:39.210331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:30.802 [2024-11-17 01:39:39.210346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.487 ms 00:19:30.802 [2024-11-17 01:39:39.210355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.803 [2024-11-17 01:39:39.228450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.803 [2024-11-17 01:39:39.228519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:30.803 [2024-11-17 01:39:39.228534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.044 ms 00:19:30.803 [2024-11-17 01:39:39.228544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.803 [2024-11-17 01:39:39.228694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.803 [2024-11-17 01:39:39.228716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:30.803 [2024-11-17 01:39:39.228727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:30.803 [2024-11-17 01:39:39.228735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.070 [2024-11-17 01:39:39.255392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.070 [2024-11-17 01:39:39.255641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:31.070 [2024-11-17 01:39:39.255664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.641 ms 00:19:31.070 [2024-11-17 01:39:39.255673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.070 [2024-11-17 01:39:39.281464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.070 [2024-11-17 01:39:39.281527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:31.070 [2024-11-17 01:39:39.281540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.646 ms 00:19:31.070 [2024-11-17 01:39:39.281548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.070 [2024-11-17 01:39:39.306992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.070 [2024-11-17 01:39:39.307038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:31.070 [2024-11-17 01:39:39.307051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.394 ms 00:19:31.070 [2024-11-17 01:39:39.307059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.070 [2024-11-17 01:39:39.332076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.070 [2024-11-17 01:39:39.332139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:31.070 [2024-11-17 01:39:39.332151] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.926 ms 00:19:31.070 [2024-11-17 01:39:39.332160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.070 [2024-11-17 01:39:39.332207] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:31.070 [2024-11-17 01:39:39.332225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:31.070 [2024-11-17 01:39:39.332244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332828] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:31.071 [2024-11-17 01:39:39.332853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.332994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 
01:39:39.333074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:31.072 [2024-11-17 01:39:39.333106] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:31.072 [2024-11-17 01:39:39.333120] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aec450e1-c0d4-4563-8d98-0437163278cc 00:19:31.072 [2024-11-17 01:39:39.333128] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:31.072 [2024-11-17 01:39:39.333137] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:31.072 [2024-11-17 01:39:39.333145] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:31.072 [2024-11-17 01:39:39.333153] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:31.072 [2024-11-17 01:39:39.333174] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:31.072 [2024-11-17 01:39:39.333184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:31.072 [2024-11-17 01:39:39.333202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:31.072 [2024-11-17 01:39:39.333208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:31.072 [2024-11-17 01:39:39.333218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:31.072 [2024-11-17 01:39:39.333225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.072 [2024-11-17 01:39:39.333232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:31.072 [2024-11-17 01:39:39.333241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:19:31.072 [2024-11-17 01:39:39.333249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.072 [2024-11-17 01:39:39.347815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.072 [2024-11-17 01:39:39.348021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:31.072 [2024-11-17 01:39:39.348040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.542 ms 00:19:31.072 [2024-11-17 01:39:39.348049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.072 [2024-11-17 01:39:39.348485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.072 [2024-11-17 01:39:39.348506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:31.072 [2024-11-17 01:39:39.348517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:19:31.072 [2024-11-17 01:39:39.348533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.072 [2024-11-17 01:39:39.388078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.072 [2024-11-17 01:39:39.388129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.072 [2024-11-17 01:39:39.388143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.072 [2024-11-17 01:39:39.388153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.072 [2024-11-17 01:39:39.388233] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:31.072 [2024-11-17 01:39:39.388244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.072 [2024-11-17 01:39:39.388254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.072 [2024-11-17 01:39:39.388269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.072 [2024-11-17 01:39:39.388360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.072 [2024-11-17 01:39:39.388373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.072 [2024-11-17 01:39:39.388382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.072 [2024-11-17 01:39:39.388391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.072 [2024-11-17 01:39:39.388408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.072 [2024-11-17 01:39:39.388418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.072 [2024-11-17 01:39:39.388428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.072 [2024-11-17 01:39:39.388437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.072 [2024-11-17 01:39:39.481310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.072 [2024-11-17 01:39:39.481374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.072 [2024-11-17 01:39:39.481390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.072 [2024-11-17 01:39:39.481401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.333 [2024-11-17 01:39:39.557311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.333 [2024-11-17 01:39:39.557379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.333 [2024-11-17 01:39:39.557393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.333 [2024-11-17 01:39:39.557403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.333 [2024-11-17 01:39:39.557491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.333 [2024-11-17 01:39:39.557503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.333 [2024-11-17 01:39:39.557513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.333 [2024-11-17 01:39:39.557522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.333 [2024-11-17 01:39:39.557595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.333 [2024-11-17 01:39:39.557608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.333 [2024-11-17 01:39:39.557618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.333 [2024-11-17 01:39:39.557627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.333 [2024-11-17 01:39:39.557752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.333 [2024-11-17 01:39:39.557765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.333 [2024-11-17 01:39:39.557775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.333 [2024-11-17 01:39:39.557784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:31.333 [2024-11-17 01:39:39.557859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.333 [2024-11-17 01:39:39.557871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:31.333 [2024-11-17 01:39:39.557881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.333 [2024-11-17 01:39:39.557890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.333 [2024-11-17 01:39:39.557944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.333 [2024-11-17 01:39:39.557960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.333 [2024-11-17 01:39:39.557971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.333 [2024-11-17 01:39:39.557981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.333 [2024-11-17 01:39:39.558039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.333 [2024-11-17 01:39:39.558052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.333 [2024-11-17 01:39:39.558062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.333 [2024-11-17 01:39:39.558071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.333 [2024-11-17 01:39:39.558238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 395.525 ms, result 0 00:19:31.905 00:19:31.905 00:19:32.167 01:39:40 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:34.713 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:34.713 01:39:42 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:34.713 [2024-11-17 01:39:42.699352] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:19:34.713 [2024-11-17 01:39:42.699729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75946 ] 00:19:34.713 [2024-11-17 01:39:42.865760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.713 [2024-11-17 01:39:43.011010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.974 [2024-11-17 01:39:43.342255] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:34.974 [2024-11-17 01:39:43.342349] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.236 [2024-11-17 01:39:43.506870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.236 [2024-11-17 01:39:43.506936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.236 [2024-11-17 01:39:43.506960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:35.236 [2024-11-17 01:39:43.506971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.236 [2024-11-17 01:39:43.507034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.236 [2024-11-17 01:39:43.507048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.236 [2024-11-17 01:39:43.507061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:35.236 [2024-11-17 01:39:43.507069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.236 [2024-11-17 01:39:43.507092] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.236 [2024-11-17 01:39:43.507921] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.236 [2024-11-17 01:39:43.507947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.236 [2024-11-17 01:39:43.507957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.236 [2024-11-17 01:39:43.507995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:19:35.236 [2024-11-17 01:39:43.508005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.236 [2024-11-17 01:39:43.510285] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:35.236 [2024-11-17 01:39:43.525731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.236 [2024-11-17 01:39:43.525784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:35.236 [2024-11-17 01:39:43.525817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.448 ms 00:19:35.236 [2024-11-17 01:39:43.525827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.236 [2024-11-17 01:39:43.525918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.236 [2024-11-17 01:39:43.525930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:35.236 [2024-11-17 01:39:43.525940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:35.236 [2024-11-17 01:39:43.525949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.236 [2024-11-17 01:39:43.537730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:35.236 [2024-11-17 01:39:43.537776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.237 [2024-11-17 01:39:43.537808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.697 ms 00:19:35.237 [2024-11-17 01:39:43.537818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.237 [2024-11-17 01:39:43.537913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.237 [2024-11-17 01:39:43.537924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.237 [2024-11-17 01:39:43.537933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:35.237 [2024-11-17 01:39:43.537941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.237 [2024-11-17 01:39:43.538004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.237 [2024-11-17 01:39:43.538018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.237 [2024-11-17 01:39:43.538028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:35.237 [2024-11-17 01:39:43.538036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.237 [2024-11-17 01:39:43.538059] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.237 [2024-11-17 01:39:43.542708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.237 [2024-11-17 01:39:43.542751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.237 [2024-11-17 01:39:43.542763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:19:35.237 [2024-11-17 01:39:43.542774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.237 [2024-11-17 01:39:43.542831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.237 [2024-11-17 01:39:43.542842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:35.237 [2024-11-17 01:39:43.542853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:35.237 [2024-11-17 01:39:43.542861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.237 [2024-11-17 01:39:43.542900] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:35.237 [2024-11-17 01:39:43.542940] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:35.237 [2024-11-17 01:39:43.542984] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:35.237 [2024-11-17 01:39:43.543005] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:35.237 [2024-11-17 01:39:43.543120] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:35.237 [2024-11-17 01:39:43.543132] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:35.237 [2024-11-17 01:39:43.543143] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:35.237 [2024-11-17 01:39:43.543154] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543165] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543174] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:35.237 [2024-11-17 01:39:43.543181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:35.237 [2024-11-17 01:39:43.543190] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:35.237 [2024-11-17 01:39:43.543198] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:35.237 [2024-11-17 01:39:43.543210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.237 [2024-11-17 01:39:43.543218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:35.237 [2024-11-17 01:39:43.543226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:19:35.237 [2024-11-17 01:39:43.543234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.237 [2024-11-17 01:39:43.543319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.237 [2024-11-17 01:39:43.543329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:35.237 [2024-11-17 01:39:43.543337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:35.237 [2024-11-17 01:39:43.543345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.237 [2024-11-17 01:39:43.543453] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:35.237 [2024-11-17 01:39:43.543467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:35.237 [2024-11-17 01:39:43.543477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:35.237 [2024-11-17 01:39:43.543505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:35.237 [2024-11-17 01:39:43.543527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.237 [2024-11-17 01:39:43.543572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:35.237 [2024-11-17 01:39:43.543579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:35.237 [2024-11-17 01:39:43.543587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.237 [2024-11-17 01:39:43.543594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:35.237 [2024-11-17 01:39:43.543603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:35.237 [2024-11-17 01:39:43.543618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:35.237 [2024-11-17 01:39:43.543634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543641] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:35.237 [2024-11-17 01:39:43.543659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:35.237 [2024-11-17 01:39:43.543682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:35.237 [2024-11-17 01:39:43.543703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:35.237 [2024-11-17 01:39:43.543725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:35.237 [2024-11-17 01:39:43.543746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.237 [2024-11-17 01:39:43.543761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:35.237 [2024-11-17 01:39:43.543768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:35.237 [2024-11-17 01:39:43.543778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.237 [2024-11-17 01:39:43.543804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:35.237 [2024-11-17 01:39:43.543812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:35.237 [2024-11-17 01:39:43.543819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:35.237 [2024-11-17 01:39:43.543833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:35.237 [2024-11-17 01:39:43.543840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543847] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:35.237 [2024-11-17 01:39:43.543856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:35.237 [2024-11-17 01:39:43.543863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.237 [2024-11-17 01:39:43.543879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:35.237 [2024-11-17 01:39:43.543886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:35.237 [2024-11-17 01:39:43.543894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:35.237 
[2024-11-17 01:39:43.543901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:35.237 [2024-11-17 01:39:43.543907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:35.237 [2024-11-17 01:39:43.543914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:35.237 [2024-11-17 01:39:43.543922] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:35.237 [2024-11-17 01:39:43.543934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.237 [2024-11-17 01:39:43.543943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:35.237 [2024-11-17 01:39:43.543950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:35.237 [2024-11-17 01:39:43.543958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:35.237 [2024-11-17 01:39:43.543966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:35.237 [2024-11-17 01:39:43.543974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:35.237 [2024-11-17 01:39:43.543982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:35.237 [2024-11-17 01:39:43.543992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:35.238 [2024-11-17 01:39:43.544001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:35.238 [2024-11-17 01:39:43.544009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:35.238 [2024-11-17 01:39:43.544017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:35.238 [2024-11-17 01:39:43.544026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:35.238 [2024-11-17 01:39:43.544034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:35.238 [2024-11-17 01:39:43.544042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:35.238 [2024-11-17 01:39:43.544052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:35.238 [2024-11-17 01:39:43.544061] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:35.238 [2024-11-17 01:39:43.544074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.238 [2024-11-17 01:39:43.544083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:35.238 [2024-11-17 01:39:43.544091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:35.238 [2024-11-17 01:39:43.544099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:35.238 [2024-11-17 01:39:43.544106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:35.238 [2024-11-17 01:39:43.544114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.238 [2024-11-17 01:39:43.544123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:35.238 [2024-11-17 01:39:43.544131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:19:35.238 [2024-11-17 01:39:43.544139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.238 [2024-11-17 01:39:43.583018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.238 [2024-11-17 01:39:43.583072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.238 [2024-11-17 01:39:43.583085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.832 ms 00:19:35.238 [2024-11-17 01:39:43.583094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.238 [2024-11-17 01:39:43.583196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.238 [2024-11-17 01:39:43.583207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.238 [2024-11-17 01:39:43.583216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:35.238 [2024-11-17 01:39:43.583226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.238 [2024-11-17 01:39:43.639735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.238 [2024-11-17 01:39:43.639809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.238 [2024-11-17 01:39:43.639824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.443 ms 00:19:35.238 [2024-11-17 01:39:43.639834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.238 [2024-11-17 01:39:43.639890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.238 [2024-11-17 01:39:43.639901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.238 [2024-11-17 01:39:43.639911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.238 [2024-11-17 01:39:43.639924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.238 [2024-11-17 01:39:43.640681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.238 [2024-11-17 01:39:43.640732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.238 [2024-11-17 01:39:43.640746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:19:35.238 [2024-11-17 01:39:43.640754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.238 [2024-11-17 01:39:43.640952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.238 [2024-11-17 01:39:43.640968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.238 [2024-11-17 01:39:43.640978] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:19:35.238 [2024-11-17 01:39:43.640994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.238 [2024-11-17 01:39:43.659458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.238 [2024-11-17 01:39:43.659505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.238 [2024-11-17 01:39:43.659521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.440 ms 00:19:35.238 [2024-11-17 01:39:43.659531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.238 [2024-11-17 01:39:43.675091] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:35.238 [2024-11-17 01:39:43.675144] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.238 [2024-11-17 01:39:43.675159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.238 [2024-11-17 01:39:43.675169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.238 [2024-11-17 01:39:43.675181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.501 ms 00:19:35.238 [2024-11-17 01:39:43.675188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.701668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.701741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.500 [2024-11-17 01:39:43.701754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.420 ms 00:19:35.500 [2024-11-17 01:39:43.701763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.715313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.715362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.500 [2024-11-17 01:39:43.715375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.263 ms 00:19:35.500 [2024-11-17 01:39:43.715383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.728296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.728345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.500 [2024-11-17 01:39:43.728358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.862 ms 00:19:35.500 [2024-11-17 01:39:43.728367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.729073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.729130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.500 [2024-11-17 01:39:43.729142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:19:35.500 [2024-11-17 01:39:43.729155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.803780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.804087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.500 [2024-11-17 01:39:43.804123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.602 ms 00:19:35.500 [2024-11-17 01:39:43.804133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.816411] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:35.500 [2024-11-17 01:39:43.820478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.820526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.500 [2024-11-17 01:39:43.820540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.261 ms 00:19:35.500 [2024-11-17 01:39:43.820549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.820648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.820660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.500 [2024-11-17 01:39:43.820670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:35.500 [2024-11-17 01:39:43.820682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.820765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.820777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.500 [2024-11-17 01:39:43.820813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:35.500 [2024-11-17 01:39:43.820823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.820847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.820858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.500 [2024-11-17 01:39:43.820870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:35.500 [2024-11-17 01:39:43.820879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.820921] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.500 [2024-11-17 01:39:43.820938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.820949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:35.500 [2024-11-17 01:39:43.820958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:35.500 [2024-11-17 01:39:43.820967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.848311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.848362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.500 [2024-11-17 01:39:43.848378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.324 ms 00:19:35.500 [2024-11-17 01:39:43.848393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.500 [2024-11-17 01:39:43.848491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.500 [2024-11-17 01:39:43.848502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.500 [2024-11-17 01:39:43.848511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:35.500 [2024-11-17 01:39:43.848520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
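(Each mngt/ftl_mngt.c trace_step block above logs an Action or Rollback with a name, duration, and status record, so per-step timings can be pulled straight out of a saved copy of this console output. A quick sketch, assuming the output has been captured to a hypothetical console.log:

    # Sum all management-step durations; the "duration = 343.223 ms" startup
    # summary below uses '=' and is deliberately not matched by this pattern
    grep -oE 'duration: [0-9.]+ ms' console.log |
        awk '{ s += $2; n++ } END { printf "%d steps, %.3f ms total\n", n, s }'
    # Show the three slowest individual steps
    grep -oE 'duration: [0-9.]+ ms' console.log | sort -k2 -rn | head -3
)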
00:19:35.500 [2024-11-17 01:39:43.850664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 343.223 ms, result 0 00:19:36.445 [2024-11-17T01:39:46.278Z] Copying: 17/1024 [MB] (17 MBps) [... intermediate Copying progress ticks elided ...] [2024-11-17T01:41:07.560Z] Copying: 1048156/1048576 [kB] (5400 kBps) [2024-11-17T01:41:07.560Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-17 01:41:07.283294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.101 [2024-11-17 01:41:07.283366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:59.101 [2024-11-17 01:41:07.283384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:59.101 [2024-11-17 01:41:07.283404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.101 [2024-11-17 01:41:07.283431] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:59.101 [2024-11-17 01:41:07.286528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.101 [2024-11-17 01:41:07.286566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:59.101 [2024-11-17 01:41:07.286578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.079 ms 00:20:59.101 [2024-11-17 01:41:07.286589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.101 [2024-11-17 01:41:07.298886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.101 [2024-11-17 01:41:07.298937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:59.101 [2024-11-17 01:41:07.298950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.524 ms 00:20:59.101 [2024-11-17 01:41:07.298958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.101 [2024-11-17 01:41:07.322982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.101 [2024-11-17 01:41:07.323038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:59.101 [2024-11-17 01:41:07.323052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.996 ms 00:20:59.101 [2024-11-17
01:41:07.323061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.101 [2024-11-17 01:41:07.329193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.101 [2024-11-17 01:41:07.329399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:59.101 [2024-11-17 01:41:07.329422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.092 ms 00:20:59.101 [2024-11-17 01:41:07.329432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.101 [2024-11-17 01:41:07.356926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.101 [2024-11-17 01:41:07.356976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:59.101 [2024-11-17 01:41:07.356989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.434 ms 00:20:59.101 [2024-11-17 01:41:07.356997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.101 [2024-11-17 01:41:07.373314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.101 [2024-11-17 01:41:07.373371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:59.101 [2024-11-17 01:41:07.373384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.266 ms 00:20:59.101 [2024-11-17 01:41:07.373393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.363 [2024-11-17 01:41:07.667431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.363 [2024-11-17 01:41:07.667527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:59.363 [2024-11-17 01:41:07.667543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 294.004 ms 00:20:59.363 [2024-11-17 01:41:07.667552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.363 [2024-11-17 01:41:07.694270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.363 [2024-11-17 01:41:07.694321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:59.363 [2024-11-17 01:41:07.694334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.701 ms 00:20:59.363 [2024-11-17 01:41:07.694343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.363 [2024-11-17 01:41:07.720453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.363 [2024-11-17 01:41:07.720515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:59.363 [2024-11-17 01:41:07.720527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.060 ms 00:20:59.363 [2024-11-17 01:41:07.720535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.363 [2024-11-17 01:41:07.747468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.363 [2024-11-17 01:41:07.747524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:59.363 [2024-11-17 01:41:07.747537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.624 ms 00:20:59.363 [2024-11-17 01:41:07.747545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.363 [2024-11-17 01:41:07.772664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.363 [2024-11-17 01:41:07.772713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:59.363 [2024-11-17 01:41:07.772726] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.029 ms 00:20:59.363 [2024-11-17 01:41:07.772733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.363 [2024-11-17 01:41:07.772781] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:59.364 [2024-11-17 01:41:07.772825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104192 / 261120 wr_cnt: 1 state: open [... Band 2 through Band 100 elided: each reports 0 / 261120 wr_cnt: 0 state: free ...] 00:20:59.365 [2024-11-17 01:41:07.773669] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:59.365 [2024-11-17 01:41:07.773678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aec450e1-c0d4-4563-8d98-0437163278cc 00:20:59.365 [2024-11-17 01:41:07.773686] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104192 00:20:59.365 [2024-11-17 01:41:07.773694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105152 00:20:59.365 [2024-11-17 01:41:07.773701] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104192 00:20:59.365 [2024-11-17 01:41:07.773710] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:20:59.365 [2024-11-17 01:41:07.773716] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:59.365 [2024-11-17 01:41:07.773730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:59.365 [2024-11-17 01:41:07.773746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:59.365 [2024-11-17 01:41:07.773753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:59.365 [2024-11-17 01:41:07.773759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:59.365 [2024-11-17 01:41:07.773767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.365 [2024-11-17 01:41:07.773775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:59.365 [2024-11-17 01:41:07.773783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:20:59.365 [2024-11-17 01:41:07.773801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.365 [2024-11-17 01:41:07.787620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.365 [2024-11-17 01:41:07.787664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:59.365 [2024-11-17 01:41:07.787677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.798 ms 00:20:59.365 [2024-11-17 01:41:07.787693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.365 [2024-11-17 01:41:07.788140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.365 [2024-11-17 01:41:07.788151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:59.365 [2024-11-17 01:41:07.788160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:20:59.365 [2024-11-17 01:41:07.788168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.626 [2024-11-17 01:41:07.825005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.626 [2024-11-17 01:41:07.825057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:59.626 [2024-11-17 01:41:07.825075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.626 [2024-11-17 01:41:07.825083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.626 [2024-11-17 01:41:07.825153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Rollback 00:20:59.626 [2024-11-17 01:41:07.825162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:59.626 [2024-11-17 01:41:07.825170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.626 [2024-11-17 01:41:07.825178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.626 [2024-11-17 01:41:07.825244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.626 [2024-11-17 01:41:07.825255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:59.626 [2024-11-17 01:41:07.825264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.626 [2024-11-17 01:41:07.825276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.626 [2024-11-17 01:41:07.825292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.626 [2024-11-17 01:41:07.825300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:59.626 [2024-11-17 01:41:07.825309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.626 [2024-11-17 01:41:07.825317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.626 [2024-11-17 01:41:07.909080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.626 [2024-11-17 01:41:07.909136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:59.626 [2024-11-17 01:41:07.909157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.627 [2024-11-17 01:41:07.909166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.627 [2024-11-17 01:41:07.978564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.627 [2024-11-17 01:41:07.978622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.627 [2024-11-17 01:41:07.978634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.627 [2024-11-17 01:41:07.978643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.627 [2024-11-17 01:41:07.978728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.627 [2024-11-17 01:41:07.978738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.627 [2024-11-17 01:41:07.978748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.627 [2024-11-17 01:41:07.978756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.627 [2024-11-17 01:41:07.978829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.627 [2024-11-17 01:41:07.978840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.627 [2024-11-17 01:41:07.978849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.627 [2024-11-17 01:41:07.978858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.627 [2024-11-17 01:41:07.978988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.627 [2024-11-17 01:41:07.978999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.627 [2024-11-17 01:41:07.979009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.627 [2024-11-17 01:41:07.979017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.627 
[2024-11-17 01:41:07.979054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.627 [2024-11-17 01:41:07.979064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:59.627 [2024-11-17 01:41:07.979072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.627 [2024-11-17 01:41:07.979080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.627 [2024-11-17 01:41:07.979121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.627 [2024-11-17 01:41:07.979131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.627 [2024-11-17 01:41:07.979141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.627 [2024-11-17 01:41:07.979148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.627 [2024-11-17 01:41:07.979200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.627 [2024-11-17 01:41:07.979211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.627 [2024-11-17 01:41:07.979220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.627 [2024-11-17 01:41:07.979228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.627 [2024-11-17 01:41:07.979366] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 696.039 ms, result 0 00:21:01.010 00:21:01.010 00:21:01.010 01:41:09 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:01.271 [2024-11-17 01:41:09.499194] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
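The ftl/restore.sh@80 step above re-reads data written earlier in the test back out through a freshly restarted FTL instance, into a file that is checksummed at the end of the test. Reformatted for readability with one option per line (paths exactly as in the log; the per-option notes assume spdk_dd's dd-style option semantics):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --skip=131072 \
      --count=262144
  # --ib    read from the ftl0 bdev rather than from a file
  # --json  bdev configuration used to bring ftl0 and its base/cache devices up
  # --skip  skip this many input I/O units before copying (dd-style)
  # --count copy this many I/O units into testfile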
00:21:01.271 [2024-11-17 01:41:09.499365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76836 ] 00:21:01.271 [2024-11-17 01:41:09.666671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.532 [2024-11-17 01:41:09.787888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.793 [2024-11-17 01:41:10.083619] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.793 [2024-11-17 01:41:10.083707] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.793 [2024-11-17 01:41:10.245762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.793 [2024-11-17 01:41:10.245846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:01.793 [2024-11-17 01:41:10.245869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:01.793 [2024-11-17 01:41:10.245878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.793 [2024-11-17 01:41:10.245937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.793 [2024-11-17 01:41:10.245949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.793 [2024-11-17 01:41:10.245961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:01.793 [2024-11-17 01:41:10.245969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.793 [2024-11-17 01:41:10.245989] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:01.793 [2024-11-17 01:41:10.246756] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:01.793 [2024-11-17 01:41:10.246775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.793 [2024-11-17 01:41:10.246783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.793 [2024-11-17 01:41:10.246822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:21:01.793 [2024-11-17 01:41:10.246831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.793 [2024-11-17 01:41:10.248688] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:02.055 [2024-11-17 01:41:10.263101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.055 [2024-11-17 01:41:10.263157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:02.055 [2024-11-17 01:41:10.263171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.415 ms 00:21:02.055 [2024-11-17 01:41:10.263180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.055 [2024-11-17 01:41:10.263268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.055 [2024-11-17 01:41:10.263278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:02.055 [2024-11-17 01:41:10.263288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:02.055 [2024-11-17 01:41:10.263296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.055 [2024-11-17 01:41:10.271759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:02.055 [2024-11-17 01:41:10.271828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.055 [2024-11-17 01:41:10.271839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.380 ms 00:21:02.056 [2024-11-17 01:41:10.271848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.056 [2024-11-17 01:41:10.271935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.056 [2024-11-17 01:41:10.271944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.056 [2024-11-17 01:41:10.271953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:02.056 [2024-11-17 01:41:10.271960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.056 [2024-11-17 01:41:10.272007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.056 [2024-11-17 01:41:10.272018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:02.056 [2024-11-17 01:41:10.272027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:02.056 [2024-11-17 01:41:10.272035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.056 [2024-11-17 01:41:10.272059] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:02.056 [2024-11-17 01:41:10.276089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.056 [2024-11-17 01:41:10.276133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.056 [2024-11-17 01:41:10.276145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.036 ms 00:21:02.056 [2024-11-17 01:41:10.276156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.056 [2024-11-17 01:41:10.276191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.056 [2024-11-17 01:41:10.276200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:02.056 [2024-11-17 01:41:10.276210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:02.056 [2024-11-17 01:41:10.276218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.056 [2024-11-17 01:41:10.276273] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:02.056 [2024-11-17 01:41:10.276296] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:02.056 [2024-11-17 01:41:10.276335] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:02.056 [2024-11-17 01:41:10.276355] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:02.056 [2024-11-17 01:41:10.276463] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:02.056 [2024-11-17 01:41:10.276475] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:02.056 [2024-11-17 01:41:10.276487] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:02.056 [2024-11-17 01:41:10.276497] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:02.056 [2024-11-17 01:41:10.276507] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:02.056 [2024-11-17 01:41:10.276515] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:02.056 [2024-11-17 01:41:10.276523] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:02.056 [2024-11-17 01:41:10.276531] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:02.056 [2024-11-17 01:41:10.276538] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:02.056 [2024-11-17 01:41:10.276548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.056 [2024-11-17 01:41:10.276556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:02.056 [2024-11-17 01:41:10.276565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:21:02.056 [2024-11-17 01:41:10.276572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.056 [2024-11-17 01:41:10.276655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.056 [2024-11-17 01:41:10.276664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:02.056 [2024-11-17 01:41:10.276672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:02.056 [2024-11-17 01:41:10.276679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.056 [2024-11-17 01:41:10.276784] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:02.056 [2024-11-17 01:41:10.276827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:02.056 [2024-11-17 01:41:10.276836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.056 [2024-11-17 01:41:10.276846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.056 [2024-11-17 01:41:10.276856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:02.056 [2024-11-17 01:41:10.276863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:02.056 [2024-11-17 01:41:10.276871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:02.056 [2024-11-17 01:41:10.276879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:02.056 [2024-11-17 01:41:10.276886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:02.056 [2024-11-17 01:41:10.276893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.056 [2024-11-17 01:41:10.276919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:02.056 [2024-11-17 01:41:10.276928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:02.056 [2024-11-17 01:41:10.276935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.056 [2024-11-17 01:41:10.276943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:02.056 [2024-11-17 01:41:10.276952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:02.056 [2024-11-17 01:41:10.276965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.056 [2024-11-17 01:41:10.276973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:02.056 [2024-11-17 01:41:10.276980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:02.056 [2024-11-17 01:41:10.276986] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.056 [2024-11-17 01:41:10.276994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:02.056 [2024-11-17 01:41:10.277002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:02.056 [2024-11-17 01:41:10.277009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.056 [2024-11-17 01:41:10.277015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:02.056 [2024-11-17 01:41:10.277022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:02.056 [2024-11-17 01:41:10.277029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.056 [2024-11-17 01:41:10.277036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:02.056 [2024-11-17 01:41:10.277043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:02.056 [2024-11-17 01:41:10.277050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.056 [2024-11-17 01:41:10.277057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:02.056 [2024-11-17 01:41:10.277064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:02.056 [2024-11-17 01:41:10.277071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.056 [2024-11-17 01:41:10.277079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:02.056 [2024-11-17 01:41:10.277085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:02.056 [2024-11-17 01:41:10.277091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.056 [2024-11-17 01:41:10.277097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:02.056 [2024-11-17 01:41:10.277105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:02.056 [2024-11-17 01:41:10.277111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.056 [2024-11-17 01:41:10.277118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:02.056 [2024-11-17 01:41:10.277124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:02.056 [2024-11-17 01:41:10.277131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.056 [2024-11-17 01:41:10.277138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:02.056 [2024-11-17 01:41:10.277145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:02.056 [2024-11-17 01:41:10.277151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.056 [2024-11-17 01:41:10.277158] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:02.056 [2024-11-17 01:41:10.277167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:02.056 [2024-11-17 01:41:10.277174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.056 [2024-11-17 01:41:10.277182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.056 [2024-11-17 01:41:10.277189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:02.056 [2024-11-17 01:41:10.277196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:02.056 [2024-11-17 01:41:10.277202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:02.056 
[2024-11-17 01:41:10.277209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:02.056 [2024-11-17 01:41:10.277216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:02.056 [2024-11-17 01:41:10.277223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:02.057 [2024-11-17 01:41:10.277231] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:02.057 [2024-11-17 01:41:10.277240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.057 [2024-11-17 01:41:10.277249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:02.057 [2024-11-17 01:41:10.277256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:02.057 [2024-11-17 01:41:10.277263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:02.057 [2024-11-17 01:41:10.277270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:02.057 [2024-11-17 01:41:10.277278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:02.057 [2024-11-17 01:41:10.277286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:02.057 [2024-11-17 01:41:10.277292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:02.057 [2024-11-17 01:41:10.277300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:02.057 [2024-11-17 01:41:10.277306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:02.057 [2024-11-17 01:41:10.277313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:02.057 [2024-11-17 01:41:10.277320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:02.057 [2024-11-17 01:41:10.277328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:02.057 [2024-11-17 01:41:10.277335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:02.057 [2024-11-17 01:41:10.277343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:02.057 [2024-11-17 01:41:10.277350] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:02.057 [2024-11-17 01:41:10.277362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.057 [2024-11-17 01:41:10.277371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:02.057 [2024-11-17 01:41:10.277379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:02.057 [2024-11-17 01:41:10.277386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:02.057 [2024-11-17 01:41:10.277394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:02.057 [2024-11-17 01:41:10.277402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.277411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:02.057 [2024-11-17 01:41:10.277419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:21:02.057 [2024-11-17 01:41:10.277427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.309846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.309895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.057 [2024-11-17 01:41:10.309908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.374 ms 00:21:02.057 [2024-11-17 01:41:10.309918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.310014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.310024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.057 [2024-11-17 01:41:10.310033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:02.057 [2024-11-17 01:41:10.310041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.353471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.353530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.057 [2024-11-17 01:41:10.353544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.369 ms 00:21:02.057 [2024-11-17 01:41:10.353553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.353603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.353614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.057 [2024-11-17 01:41:10.353623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:02.057 [2024-11-17 01:41:10.353635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.354298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.354333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.057 [2024-11-17 01:41:10.354344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:21:02.057 [2024-11-17 01:41:10.354352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.354510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.354530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.057 [2024-11-17 01:41:10.354539] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:21:02.057 [2024-11-17 01:41:10.354553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.370495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.370546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.057 [2024-11-17 01:41:10.370561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.921 ms 00:21:02.057 [2024-11-17 01:41:10.370570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.385203] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:02.057 [2024-11-17 01:41:10.385257] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:02.057 [2024-11-17 01:41:10.385272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.385282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:02.057 [2024-11-17 01:41:10.385292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.591 ms 00:21:02.057 [2024-11-17 01:41:10.385299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.411680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.411743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:02.057 [2024-11-17 01:41:10.411756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.322 ms 00:21:02.057 [2024-11-17 01:41:10.411764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.425232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.425447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:02.057 [2024-11-17 01:41:10.425470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.391 ms 00:21:02.057 [2024-11-17 01:41:10.425478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.438641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.438692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:02.057 [2024-11-17 01:41:10.438706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.122 ms 00:21:02.057 [2024-11-17 01:41:10.438713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.439402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.439438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:02.057 [2024-11-17 01:41:10.439449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:21:02.057 [2024-11-17 01:41:10.439461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.057 [2024-11-17 01:41:10.506596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.057 [2024-11-17 01:41:10.506665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:02.057 [2024-11-17 01:41:10.506691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.115 ms 00:21:02.057 [2024-11-17 01:41:10.506700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.319 [2024-11-17 01:41:10.519629] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:02.319 [2024-11-17 01:41:10.522881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.319 [2024-11-17 01:41:10.522927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.319 [2024-11-17 01:41:10.522940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.118 ms 00:21:02.319 [2024-11-17 01:41:10.522950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.319 [2024-11-17 01:41:10.523041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.319 [2024-11-17 01:41:10.523052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:02.319 [2024-11-17 01:41:10.523062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:02.319 [2024-11-17 01:41:10.523075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.319 [2024-11-17 01:41:10.524955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.319 [2024-11-17 01:41:10.525004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:02.319 [2024-11-17 01:41:10.525016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.842 ms 00:21:02.319 [2024-11-17 01:41:10.525025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.319 [2024-11-17 01:41:10.525057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.319 [2024-11-17 01:41:10.525066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:02.319 [2024-11-17 01:41:10.525075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:02.319 [2024-11-17 01:41:10.525083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.319 [2024-11-17 01:41:10.525128] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:02.319 [2024-11-17 01:41:10.525142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.319 [2024-11-17 01:41:10.525151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:02.319 [2024-11-17 01:41:10.525160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:02.319 [2024-11-17 01:41:10.525167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.319 [2024-11-17 01:41:10.552010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.319 [2024-11-17 01:41:10.552063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:02.319 [2024-11-17 01:41:10.552077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.822 ms 00:21:02.319 [2024-11-17 01:41:10.552092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.320 [2024-11-17 01:41:10.552192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.320 [2024-11-17 01:41:10.552203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:02.320 [2024-11-17 01:41:10.552213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:02.320 [2024-11-17 01:41:10.552222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
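This startup pass reloads everything the previous shutdown persisted: NV cache chunk state (4 full, 0 empty), the valid map, band info, trim and P2L metadata, and finally the L2P. It then flags the device dirty again ('Set FTL dirty state'), so a stop before the next 'Set FTL clean state' would be detected as unclean. The 'Dump statistics' blocks in this section also make the write-amplification figure easy to verify by hand: WAF is total writes divided by user writes (105152 / 104192 ≈ 1.0092 in the shutdown above; 27840 / 26880 ≈ 1.0357 after the copy below). To recompute it from a saved log (same assumptions as above: hypothetical build.log, one record per line):

  awk '/total writes:/ { t = $NF }
       /user writes:/  { printf "WAF = %.4f\n", t / $NF }' build.log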
00:21:02.320 [2024-11-17 01:41:10.553498] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.244 ms, result 0 00:21:03.705  [2024-11-17T01:41:13.108Z] Copying: 8520/1048576 [kB] (8520 kBps) [2024-11-17T01:41:14.052Z] Copying: 19/1024 [MB] (10 MBps) [2024-11-17T01:41:14.996Z] Copying: 33/1024 [MB] (13 MBps) [2024-11-17T01:41:15.940Z] Copying: 53/1024 [MB] (20 MBps) [2024-11-17T01:41:16.882Z] Copying: 68/1024 [MB] (15 MBps) [2024-11-17T01:41:17.826Z] Copying: 82/1024 [MB] (14 MBps) [2024-11-17T01:41:18.771Z] Copying: 100/1024 [MB] (17 MBps) [2024-11-17T01:41:20.159Z] Copying: 111/1024 [MB] (10 MBps) [2024-11-17T01:41:21.102Z] Copying: 122/1024 [MB] (10 MBps) [2024-11-17T01:41:22.114Z] Copying: 145/1024 [MB] (23 MBps) [2024-11-17T01:41:23.092Z] Copying: 170/1024 [MB] (24 MBps) [2024-11-17T01:41:24.037Z] Copying: 181/1024 [MB] (11 MBps) [2024-11-17T01:41:24.982Z] Copying: 192/1024 [MB] (11 MBps) [2024-11-17T01:41:25.928Z] Copying: 211/1024 [MB] (18 MBps) [2024-11-17T01:41:26.872Z] Copying: 225/1024 [MB] (14 MBps) [2024-11-17T01:41:27.816Z] Copying: 245/1024 [MB] (19 MBps) [2024-11-17T01:41:28.760Z] Copying: 259/1024 [MB] (14 MBps) [2024-11-17T01:41:30.145Z] Copying: 271/1024 [MB] (11 MBps) [2024-11-17T01:41:31.089Z] Copying: 287/1024 [MB] (15 MBps) [2024-11-17T01:41:32.033Z] Copying: 304/1024 [MB] (17 MBps) [2024-11-17T01:41:32.977Z] Copying: 327/1024 [MB] (22 MBps) [2024-11-17T01:41:33.921Z] Copying: 345/1024 [MB] (18 MBps) [2024-11-17T01:41:34.865Z] Copying: 364/1024 [MB] (19 MBps) [2024-11-17T01:41:35.809Z] Copying: 390/1024 [MB] (25 MBps) [2024-11-17T01:41:36.752Z] Copying: 414/1024 [MB] (24 MBps) [2024-11-17T01:41:38.135Z] Copying: 424/1024 [MB] (10 MBps) [2024-11-17T01:41:39.078Z] Copying: 435/1024 [MB] (10 MBps) [2024-11-17T01:41:40.026Z] Copying: 445/1024 [MB] (10 MBps) [2024-11-17T01:41:40.971Z] Copying: 456/1024 [MB] (10 MBps) [2024-11-17T01:41:41.915Z] Copying: 470/1024 [MB] (14 MBps) [2024-11-17T01:41:42.862Z] Copying: 483/1024 [MB] (12 MBps) [2024-11-17T01:41:43.806Z] Copying: 496/1024 [MB] (13 MBps) [2024-11-17T01:41:44.750Z] Copying: 507/1024 [MB] (10 MBps) [2024-11-17T01:41:46.136Z] Copying: 518/1024 [MB] (10 MBps) [2024-11-17T01:41:47.079Z] Copying: 529/1024 [MB] (10 MBps) [2024-11-17T01:41:48.022Z] Copying: 546/1024 [MB] (16 MBps) [2024-11-17T01:41:48.965Z] Copying: 560/1024 [MB] (13 MBps) [2024-11-17T01:41:49.908Z] Copying: 571/1024 [MB] (11 MBps) [2024-11-17T01:41:50.851Z] Copying: 582/1024 [MB] (10 MBps) [2024-11-17T01:41:51.795Z] Copying: 597/1024 [MB] (15 MBps) [2024-11-17T01:41:53.183Z] Copying: 607/1024 [MB] (10 MBps) [2024-11-17T01:41:53.755Z] Copying: 621/1024 [MB] (13 MBps) [2024-11-17T01:41:55.143Z] Copying: 638/1024 [MB] (17 MBps) [2024-11-17T01:41:56.087Z] Copying: 648/1024 [MB] (10 MBps) [2024-11-17T01:41:57.088Z] Copying: 661/1024 [MB] (12 MBps) [2024-11-17T01:41:58.051Z] Copying: 678/1024 [MB] (17 MBps) [2024-11-17T01:41:58.997Z] Copying: 696/1024 [MB] (17 MBps) [2024-11-17T01:41:59.941Z] Copying: 713/1024 [MB] (16 MBps) [2024-11-17T01:42:00.887Z] Copying: 734/1024 [MB] (21 MBps) [2024-11-17T01:42:01.832Z] Copying: 749/1024 [MB] (15 MBps) [2024-11-17T01:42:02.777Z] Copying: 776/1024 [MB] (27 MBps) [2024-11-17T01:42:04.163Z] Copying: 787/1024 [MB] (11 MBps) [2024-11-17T01:42:05.108Z] Copying: 799/1024 [MB] (11 MBps) [2024-11-17T01:42:06.052Z] Copying: 811/1024 [MB] (11 MBps) [2024-11-17T01:42:06.991Z] Copying: 822/1024 [MB] (10 MBps) [2024-11-17T01:42:07.936Z] Copying: 839/1024 [MB] (17 MBps) 
[2024-11-17T01:42:08.880Z] Copying: 850/1024 [MB] (10 MBps) [2024-11-17T01:42:09.823Z] Copying: 862/1024 [MB] (12 MBps) [2024-11-17T01:42:10.766Z] Copying: 872/1024 [MB] (10 MBps) [2024-11-17T01:42:12.153Z] Copying: 885/1024 [MB] (12 MBps) [2024-11-17T01:42:13.100Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-17T01:42:14.045Z] Copying: 906/1024 [MB] (10 MBps) [2024-11-17T01:42:14.992Z] Copying: 916/1024 [MB] (10 MBps) [2024-11-17T01:42:15.937Z] Copying: 927/1024 [MB] (10 MBps) [2024-11-17T01:42:16.881Z] Copying: 945/1024 [MB] (17 MBps) [2024-11-17T01:42:17.826Z] Copying: 962/1024 [MB] (17 MBps) [2024-11-17T01:42:18.770Z] Copying: 978/1024 [MB] (15 MBps) [2024-11-17T01:42:20.162Z] Copying: 996/1024 [MB] (18 MBps) [2024-11-17T01:42:20.424Z] Copying: 1014/1024 [MB] (17 MBps) [2024-11-17T01:42:20.424Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-17 01:42:20.377292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.965 [2024-11-17 01:42:20.377565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:11.965 [2024-11-17 01:42:20.377661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:11.965 [2024-11-17 01:42:20.377690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.965 [2024-11-17 01:42:20.377758] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:11.965 [2024-11-17 01:42:20.381929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.965 [2024-11-17 01:42:20.382110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:11.965 [2024-11-17 01:42:20.382189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.033 ms 00:22:11.965 [2024-11-17 01:42:20.382216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.965 [2024-11-17 01:42:20.382515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.965 [2024-11-17 01:42:20.382592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:11.965 [2024-11-17 01:42:20.382622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:22:11.965 [2024-11-17 01:42:20.382646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.965 [2024-11-17 01:42:20.389712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.965 [2024-11-17 01:42:20.389908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:11.965 [2024-11-17 01:42:20.389932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.979 ms 00:22:11.965 [2024-11-17 01:42:20.389943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.965 [2024-11-17 01:42:20.396602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.965 [2024-11-17 01:42:20.396754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:11.965 [2024-11-17 01:42:20.396833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.611 ms 00:22:11.965 [2024-11-17 01:42:20.396858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.226 [2024-11-17 01:42:20.424342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.226 [2024-11-17 01:42:20.424523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:12.226 [2024-11-17 01:42:20.424707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 27.412 ms 00:22:12.226 [2024-11-17 01:42:20.424748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.226 [2024-11-17 01:42:20.440433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.226 [2024-11-17 01:42:20.440595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:12.226 [2024-11-17 01:42:20.440654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.610 ms 00:22:12.226 [2024-11-17 01:42:20.440679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.226 [2024-11-17 01:42:20.655199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.226 [2024-11-17 01:42:20.655370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:12.226 [2024-11-17 01:42:20.655430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 214.359 ms 00:22:12.226 [2024-11-17 01:42:20.655477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.226 [2024-11-17 01:42:20.681827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.226 [2024-11-17 01:42:20.681998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:12.226 [2024-11-17 01:42:20.682056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.314 ms 00:22:12.226 [2024-11-17 01:42:20.682080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.489 [2024-11-17 01:42:20.707572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.489 [2024-11-17 01:42:20.707740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:12.489 [2024-11-17 01:42:20.707849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.369 ms 00:22:12.489 [2024-11-17 01:42:20.707874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.489 [2024-11-17 01:42:20.732695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.489 [2024-11-17 01:42:20.732880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:12.489 [2024-11-17 01:42:20.732941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.706 ms 00:22:12.489 [2024-11-17 01:42:20.732964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.489 [2024-11-17 01:42:20.758028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.489 [2024-11-17 01:42:20.758203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:12.489 [2024-11-17 01:42:20.758263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.901 ms 00:22:12.489 [2024-11-17 01:42:20.758285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.489 [2024-11-17 01:42:20.758365] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:12.489 [2024-11-17 01:42:20.758397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:12.489 [2024-11-17 01:42:20.758429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:12.489 [2024-11-17 01:42:20.758569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.758998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:12.489 [2024-11-17 01:42:20.759094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759196] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:12.490 [2024-11-17 01:42:20.759388] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:12.490 [2024-11-17 01:42:20.759396] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aec450e1-c0d4-4563-8d98-0437163278cc 00:22:12.490 [2024-11-17 01:42:20.759405] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:12.490 [2024-11-17 01:42:20.759413] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 27840 00:22:12.490 [2024-11-17 01:42:20.759421] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 26880 00:22:12.490 [2024-11-17 01:42:20.759430] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0357 00:22:12.490 [2024-11-17 01:42:20.759438] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:12.490 [2024-11-17 01:42:20.759462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:12.490 [2024-11-17 01:42:20.759470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:12.490 [2024-11-17 01:42:20.759484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:12.490 [2024-11-17 01:42:20.759491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:12.490 [2024-11-17 01:42:20.759500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.490 [2024-11-17 01:42:20.759509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:12.490 [2024-11-17 01:42:20.759518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:22:12.490 [2024-11-17 01:42:20.759526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.490 [2024-11-17 01:42:20.773246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.490 [2024-11-17 01:42:20.773417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:12.490 [2024-11-17 01:42:20.773435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.678 ms 00:22:12.490 [2024-11-17 01:42:20.773450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.490 [2024-11-17 01:42:20.773854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.490 [2024-11-17 01:42:20.773867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:12.490 [2024-11-17 01:42:20.773877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:22:12.490 [2024-11-17 01:42:20.773885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.490 [2024-11-17 01:42:20.810425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.490 [2024-11-17 01:42:20.810476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:12.490 [2024-11-17 01:42:20.810493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.490 [2024-11-17 01:42:20.810503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.490 [2024-11-17 01:42:20.810567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.490 [2024-11-17 01:42:20.810578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:12.490 [2024-11-17 01:42:20.810588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.490 [2024-11-17 01:42:20.810597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.490 [2024-11-17 01:42:20.810678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.490 [2024-11-17 01:42:20.810690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:12.490 [2024-11-17 01:42:20.810700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.490 [2024-11-17 01:42:20.810714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.490 [2024-11-17 01:42:20.810731] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.490 [2024-11-17 01:42:20.810740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:12.490 [2024-11-17 01:42:20.810749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.490 [2024-11-17 01:42:20.810757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.490 [2024-11-17 01:42:20.894119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.490 [2024-11-17 01:42:20.894178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:12.490 [2024-11-17 01:42:20.894199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.490 [2024-11-17 01:42:20.894209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.752 [2024-11-17 01:42:20.962550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.752 [2024-11-17 01:42:20.962607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:12.752 [2024-11-17 01:42:20.962621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.752 [2024-11-17 01:42:20.962629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.752 [2024-11-17 01:42:20.962688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.752 [2024-11-17 01:42:20.962698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:12.752 [2024-11-17 01:42:20.962707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.752 [2024-11-17 01:42:20.962716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.752 [2024-11-17 01:42:20.962783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.752 [2024-11-17 01:42:20.962824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:12.752 [2024-11-17 01:42:20.962834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.752 [2024-11-17 01:42:20.962842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.752 [2024-11-17 01:42:20.962946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.752 [2024-11-17 01:42:20.962959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:12.752 [2024-11-17 01:42:20.962968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.752 [2024-11-17 01:42:20.962976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.752 [2024-11-17 01:42:20.963012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.752 [2024-11-17 01:42:20.963022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:12.752 [2024-11-17 01:42:20.963032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.752 [2024-11-17 01:42:20.963040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.752 [2024-11-17 01:42:20.963081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.752 [2024-11-17 01:42:20.963091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:12.752 [2024-11-17 01:42:20.963099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.752 [2024-11-17 01:42:20.963109] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:12.752 [2024-11-17 01:42:20.963157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.752 [2024-11-17 01:42:20.963168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:12.752 [2024-11-17 01:42:20.963177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.752 [2024-11-17 01:42:20.963185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.752 [2024-11-17 01:42:20.963322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 585.993 ms, result 0 00:22:13.323 00:22:13.323 00:22:13.323 01:42:21 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:15.871 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:15.871 01:42:23 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:15.871 01:42:23 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:15.872 01:42:23 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:15.872 01:42:24 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:15.872 01:42:24 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:15.872 01:42:24 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74536 00:22:15.872 01:42:24 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74536 ']' 00:22:15.872 01:42:24 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74536 00:22:15.872 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74536) - No such process 00:22:15.872 Process with pid 74536 is not found 00:22:15.872 01:42:24 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 74536 is not found' 00:22:15.872 Remove shared memory files 00:22:15.872 01:42:24 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:15.872 01:42:24 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:15.872 01:42:24 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:15.872 01:42:24 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:15.872 01:42:24 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:15.872 01:42:24 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:15.872 01:42:24 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:15.872 ************************************ 00:22:15.872 END TEST ftl_restore 00:22:15.872 ************************************ 00:22:15.872 00:22:15.872 real 4m54.012s 00:22:15.872 user 4m41.934s 00:22:15.872 sys 0m12.087s 00:22:15.872 01:42:24 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.872 01:42:24 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:15.872 01:42:24 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:15.872 01:42:24 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:15.872 01:42:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.872 01:42:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:15.872 ************************************ 00:22:15.872 START TEST ftl_dirty_shutdown 00:22:15.872 ************************************ 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:15.872 * Looking for test storage... 00:22:15.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:15.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.872 --rc genhtml_branch_coverage=1 00:22:15.872 --rc genhtml_function_coverage=1 00:22:15.872 --rc genhtml_legend=1 00:22:15.872 --rc geninfo_all_blocks=1 00:22:15.872 --rc geninfo_unexecuted_blocks=1 00:22:15.872 00:22:15.872 ' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:15.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.872 --rc genhtml_branch_coverage=1 00:22:15.872 --rc genhtml_function_coverage=1 00:22:15.872 --rc genhtml_legend=1 00:22:15.872 --rc geninfo_all_blocks=1 00:22:15.872 --rc geninfo_unexecuted_blocks=1 00:22:15.872 00:22:15.872 ' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:15.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.872 --rc genhtml_branch_coverage=1 00:22:15.872 --rc genhtml_function_coverage=1 00:22:15.872 --rc genhtml_legend=1 00:22:15.872 --rc geninfo_all_blocks=1 00:22:15.872 --rc geninfo_unexecuted_blocks=1 00:22:15.872 00:22:15.872 ' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:15.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.872 --rc genhtml_branch_coverage=1 00:22:15.872 --rc genhtml_function_coverage=1 00:22:15.872 --rc genhtml_legend=1 00:22:15.872 --rc geninfo_all_blocks=1 00:22:15.872 --rc geninfo_unexecuted_blocks=1 00:22:15.872 00:22:15.872 ' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:15.872 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:15.873 01:42:24 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77667 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77667 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77667 ']' 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.873 01:42:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:16.134 [2024-11-17 01:42:24.373660] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
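[annotation] The startup sequence traced above (dirty_shutdown.sh@42-47) amounts to launching spdk_tgt pinned to core 0 and blocking until its RPC socket answers before any rpc.py calls are issued. A minimal shell sketch of that step, assuming the default /var/tmp/spdk.sock socket and the harness's waitforlisten helper, both of which appear in the trace:

    # Launch the SPDK target on core 0 (-m 0x1) and record its pid,
    # mirroring the traced 'svcpid=77667' / 'waitforlisten 77667' steps.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # waitforlisten (from autotest_common.sh) polls /var/tmp/spdk.sock
    # until the target's RPC listener is up, then returns.
    waitforlisten "$svcpid"
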
00:22:16.134 [2024-11-17 01:42:24.373996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77667 ] 00:22:16.134 [2024-11-17 01:42:24.537264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.395 [2024-11-17 01:42:24.635230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.967 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.968 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:16.968 01:42:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:16.968 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:16.968 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:16.968 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:16.968 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:16.968 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:17.228 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:17.228 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:17.228 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:17.228 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:17.228 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:17.228 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:17.228 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:17.229 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:17.490 { 00:22:17.490 "name": "nvme0n1", 00:22:17.490 "aliases": [ 00:22:17.490 "6cf809cb-f1ab-4d7f-b409-e5cc3de34c07" 00:22:17.490 ], 00:22:17.490 "product_name": "NVMe disk", 00:22:17.490 "block_size": 4096, 00:22:17.490 "num_blocks": 1310720, 00:22:17.490 "uuid": "6cf809cb-f1ab-4d7f-b409-e5cc3de34c07", 00:22:17.490 "numa_id": -1, 00:22:17.490 "assigned_rate_limits": { 00:22:17.490 "rw_ios_per_sec": 0, 00:22:17.490 "rw_mbytes_per_sec": 0, 00:22:17.490 "r_mbytes_per_sec": 0, 00:22:17.490 "w_mbytes_per_sec": 0 00:22:17.490 }, 00:22:17.490 "claimed": true, 00:22:17.490 "claim_type": "read_many_write_one", 00:22:17.490 "zoned": false, 00:22:17.490 "supported_io_types": { 00:22:17.490 "read": true, 00:22:17.490 "write": true, 00:22:17.490 "unmap": true, 00:22:17.490 "flush": true, 00:22:17.490 "reset": true, 00:22:17.490 "nvme_admin": true, 00:22:17.490 "nvme_io": true, 00:22:17.490 "nvme_io_md": false, 00:22:17.490 "write_zeroes": true, 00:22:17.490 "zcopy": false, 00:22:17.490 "get_zone_info": false, 00:22:17.490 "zone_management": false, 00:22:17.490 "zone_append": false, 00:22:17.490 "compare": true, 00:22:17.490 "compare_and_write": false, 00:22:17.490 "abort": true, 00:22:17.490 "seek_hole": false, 00:22:17.490 "seek_data": false, 00:22:17.490 
"copy": true, 00:22:17.490 "nvme_iov_md": false 00:22:17.490 }, 00:22:17.490 "driver_specific": { 00:22:17.490 "nvme": [ 00:22:17.490 { 00:22:17.490 "pci_address": "0000:00:11.0", 00:22:17.490 "trid": { 00:22:17.490 "trtype": "PCIe", 00:22:17.490 "traddr": "0000:00:11.0" 00:22:17.490 }, 00:22:17.490 "ctrlr_data": { 00:22:17.490 "cntlid": 0, 00:22:17.490 "vendor_id": "0x1b36", 00:22:17.490 "model_number": "QEMU NVMe Ctrl", 00:22:17.490 "serial_number": "12341", 00:22:17.490 "firmware_revision": "8.0.0", 00:22:17.490 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:17.490 "oacs": { 00:22:17.490 "security": 0, 00:22:17.490 "format": 1, 00:22:17.490 "firmware": 0, 00:22:17.490 "ns_manage": 1 00:22:17.490 }, 00:22:17.490 "multi_ctrlr": false, 00:22:17.490 "ana_reporting": false 00:22:17.490 }, 00:22:17.490 "vs": { 00:22:17.490 "nvme_version": "1.4" 00:22:17.490 }, 00:22:17.490 "ns_data": { 00:22:17.490 "id": 1, 00:22:17.490 "can_share": false 00:22:17.490 } 00:22:17.490 } 00:22:17.490 ], 00:22:17.490 "mp_policy": "active_passive" 00:22:17.490 } 00:22:17.490 } 00:22:17.490 ]' 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:17.490 01:42:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:17.752 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b5e8cb36-5f76-40fb-895c-ae2223e5d461 00:22:17.752 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:17.752 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5e8cb36-5f76-40fb-895c-ae2223e5d461 00:22:18.013 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:18.274 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=eec61e46-5195-4162-8784-14556badbf02 00:22:18.274 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eec61e46-5195-4162-8784-14556badbf02 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:18.535 { 00:22:18.535 "name": "1bda1df2-10ac-485f-be1a-6caff4e07c50", 00:22:18.535 "aliases": [ 00:22:18.535 "lvs/nvme0n1p0" 00:22:18.535 ], 00:22:18.535 "product_name": "Logical Volume", 00:22:18.535 "block_size": 4096, 00:22:18.535 "num_blocks": 26476544, 00:22:18.535 "uuid": "1bda1df2-10ac-485f-be1a-6caff4e07c50", 00:22:18.535 "assigned_rate_limits": { 00:22:18.535 "rw_ios_per_sec": 0, 00:22:18.535 "rw_mbytes_per_sec": 0, 00:22:18.535 "r_mbytes_per_sec": 0, 00:22:18.535 "w_mbytes_per_sec": 0 00:22:18.535 }, 00:22:18.535 "claimed": false, 00:22:18.535 "zoned": false, 00:22:18.535 "supported_io_types": { 00:22:18.535 "read": true, 00:22:18.535 "write": true, 00:22:18.535 "unmap": true, 00:22:18.535 "flush": false, 00:22:18.535 "reset": true, 00:22:18.535 "nvme_admin": false, 00:22:18.535 "nvme_io": false, 00:22:18.535 "nvme_io_md": false, 00:22:18.535 "write_zeroes": true, 00:22:18.535 "zcopy": false, 00:22:18.535 "get_zone_info": false, 00:22:18.535 "zone_management": false, 00:22:18.535 "zone_append": false, 00:22:18.535 "compare": false, 00:22:18.535 "compare_and_write": false, 00:22:18.535 "abort": false, 00:22:18.535 "seek_hole": true, 00:22:18.535 "seek_data": true, 00:22:18.535 "copy": false, 00:22:18.535 "nvme_iov_md": false 00:22:18.535 }, 00:22:18.535 "driver_specific": { 00:22:18.535 "lvol": { 00:22:18.535 "lvol_store_uuid": "eec61e46-5195-4162-8784-14556badbf02", 00:22:18.535 "base_bdev": "nvme0n1", 00:22:18.535 "thin_provision": true, 00:22:18.535 "num_allocated_clusters": 0, 00:22:18.535 "snapshot": false, 00:22:18.535 "clone": false, 00:22:18.535 "esnap_clone": false 00:22:18.535 } 00:22:18.535 } 00:22:18.535 } 00:22:18.535 ]' 00:22:18.535 01:42:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:18.796 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:18.796 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:18.796 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:18.796 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:18.796 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:18.796 01:42:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:18.796 01:42:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:18.796 01:42:27 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:19.057 01:42:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:19.057 01:42:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:19.057 01:42:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:19.057 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:19.057 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:19.057 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:19.057 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:19.057 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:19.318 { 00:22:19.318 "name": "1bda1df2-10ac-485f-be1a-6caff4e07c50", 00:22:19.318 "aliases": [ 00:22:19.318 "lvs/nvme0n1p0" 00:22:19.318 ], 00:22:19.318 "product_name": "Logical Volume", 00:22:19.318 "block_size": 4096, 00:22:19.318 "num_blocks": 26476544, 00:22:19.318 "uuid": "1bda1df2-10ac-485f-be1a-6caff4e07c50", 00:22:19.318 "assigned_rate_limits": { 00:22:19.318 "rw_ios_per_sec": 0, 00:22:19.318 "rw_mbytes_per_sec": 0, 00:22:19.318 "r_mbytes_per_sec": 0, 00:22:19.318 "w_mbytes_per_sec": 0 00:22:19.318 }, 00:22:19.318 "claimed": false, 00:22:19.318 "zoned": false, 00:22:19.318 "supported_io_types": { 00:22:19.318 "read": true, 00:22:19.318 "write": true, 00:22:19.318 "unmap": true, 00:22:19.318 "flush": false, 00:22:19.318 "reset": true, 00:22:19.318 "nvme_admin": false, 00:22:19.318 "nvme_io": false, 00:22:19.318 "nvme_io_md": false, 00:22:19.318 "write_zeroes": true, 00:22:19.318 "zcopy": false, 00:22:19.318 "get_zone_info": false, 00:22:19.318 "zone_management": false, 00:22:19.318 "zone_append": false, 00:22:19.318 "compare": false, 00:22:19.318 "compare_and_write": false, 00:22:19.318 "abort": false, 00:22:19.318 "seek_hole": true, 00:22:19.318 "seek_data": true, 00:22:19.318 "copy": false, 00:22:19.318 "nvme_iov_md": false 00:22:19.318 }, 00:22:19.318 "driver_specific": { 00:22:19.318 "lvol": { 00:22:19.318 "lvol_store_uuid": "eec61e46-5195-4162-8784-14556badbf02", 00:22:19.318 "base_bdev": "nvme0n1", 00:22:19.318 "thin_provision": true, 00:22:19.318 "num_allocated_clusters": 0, 00:22:19.318 "snapshot": false, 00:22:19.318 "clone": false, 00:22:19.318 "esnap_clone": false 00:22:19.318 } 00:22:19.318 } 00:22:19.318 } 00:22:19.318 ]' 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:19.318 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:19.579 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bda1df2-10ac-485f-be1a-6caff4e07c50 00:22:19.580 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:19.580 { 00:22:19.580 "name": "1bda1df2-10ac-485f-be1a-6caff4e07c50", 00:22:19.580 "aliases": [ 00:22:19.580 "lvs/nvme0n1p0" 00:22:19.580 ], 00:22:19.580 "product_name": "Logical Volume", 00:22:19.580 "block_size": 4096, 00:22:19.580 "num_blocks": 26476544, 00:22:19.580 "uuid": "1bda1df2-10ac-485f-be1a-6caff4e07c50", 00:22:19.580 "assigned_rate_limits": { 00:22:19.580 "rw_ios_per_sec": 0, 00:22:19.580 "rw_mbytes_per_sec": 0, 00:22:19.580 "r_mbytes_per_sec": 0, 00:22:19.580 "w_mbytes_per_sec": 0 00:22:19.580 }, 00:22:19.580 "claimed": false, 00:22:19.580 "zoned": false, 00:22:19.580 "supported_io_types": { 00:22:19.580 "read": true, 00:22:19.580 "write": true, 00:22:19.580 "unmap": true, 00:22:19.580 "flush": false, 00:22:19.580 "reset": true, 00:22:19.580 "nvme_admin": false, 00:22:19.580 "nvme_io": false, 00:22:19.580 "nvme_io_md": false, 00:22:19.580 "write_zeroes": true, 00:22:19.580 "zcopy": false, 00:22:19.580 "get_zone_info": false, 00:22:19.580 "zone_management": false, 00:22:19.580 "zone_append": false, 00:22:19.580 "compare": false, 00:22:19.580 "compare_and_write": false, 00:22:19.580 "abort": false, 00:22:19.580 "seek_hole": true, 00:22:19.580 "seek_data": true, 00:22:19.580 "copy": false, 00:22:19.580 "nvme_iov_md": false 00:22:19.580 }, 00:22:19.580 "driver_specific": { 00:22:19.580 "lvol": { 00:22:19.580 "lvol_store_uuid": "eec61e46-5195-4162-8784-14556badbf02", 00:22:19.580 "base_bdev": "nvme0n1", 00:22:19.580 "thin_provision": true, 00:22:19.580 "num_allocated_clusters": 0, 00:22:19.580 "snapshot": false, 00:22:19.580 "clone": false, 00:22:19.580 "esnap_clone": false 00:22:19.580 } 00:22:19.580 } 00:22:19.580 } 00:22:19.580 ]' 00:22:19.580 01:42:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:19.580 01:42:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:19.580 01:42:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:19.861 01:42:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:19.861 01:42:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:19.861 01:42:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:19.861 01:42:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:19.861 01:42:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1bda1df2-10ac-485f-be1a-6caff4e07c50 
--l2p_dram_limit 10' 00:22:19.861 01:42:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:19.861 01:42:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:19.861 01:42:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:19.862 01:42:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1bda1df2-10ac-485f-be1a-6caff4e07c50 --l2p_dram_limit 10 -c nvc0n1p0 00:22:19.862 [2024-11-17 01:42:28.226205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.226243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:19.862 [2024-11-17 01:42:28.226255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:19.862 [2024-11-17 01:42:28.226262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.226305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.226313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:19.862 [2024-11-17 01:42:28.226321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:19.862 [2024-11-17 01:42:28.226326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.226346] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:19.862 [2024-11-17 01:42:28.226960] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:19.862 [2024-11-17 01:42:28.226977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.226983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:19.862 [2024-11-17 01:42:28.226991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:22:19.862 [2024-11-17 01:42:28.226996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.227023] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8762e556-2333-475f-8181-d698776a93fd 00:22:19.862 [2024-11-17 01:42:28.228017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.228041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:19.862 [2024-11-17 01:42:28.228049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:19.862 [2024-11-17 01:42:28.228057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.232755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.232784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:19.862 [2024-11-17 01:42:28.232802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:22:19.862 [2024-11-17 01:42:28.232809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.232876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.232885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:19.862 [2024-11-17 01:42:28.232891] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:19.862 [2024-11-17 01:42:28.232901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.232939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.232948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:19.862 [2024-11-17 01:42:28.232955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:19.862 [2024-11-17 01:42:28.232963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.232979] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:19.862 [2024-11-17 01:42:28.235909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.235993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:19.862 [2024-11-17 01:42:28.236041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:22:19.862 [2024-11-17 01:42:28.236059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.236098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.236152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:19.862 [2024-11-17 01:42:28.236172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:19.862 [2024-11-17 01:42:28.236186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.236217] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:19.862 [2024-11-17 01:42:28.236358] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:19.862 [2024-11-17 01:42:28.236424] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:19.862 [2024-11-17 01:42:28.236451] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:19.862 [2024-11-17 01:42:28.236476] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:19.862 [2024-11-17 01:42:28.236526] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:19.862 [2024-11-17 01:42:28.236553] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:19.862 [2024-11-17 01:42:28.236573] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:19.862 [2024-11-17 01:42:28.236591] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:19.862 [2024-11-17 01:42:28.236606] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:19.862 [2024-11-17 01:42:28.236622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.236637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:19.862 [2024-11-17 01:42:28.236684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:22:19.862 [2024-11-17 01:42:28.236706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.236784] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.862 [2024-11-17 01:42:28.236813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:19.862 [2024-11-17 01:42:28.236830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:19.862 [2024-11-17 01:42:28.236865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.862 [2024-11-17 01:42:28.236965] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:19.862 [2024-11-17 01:42:28.236985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:19.862 [2024-11-17 01:42:28.237001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:19.863 [2024-11-17 01:42:28.237025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:19.863 [2024-11-17 01:42:28.237055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:19.863 [2024-11-17 01:42:28.237084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:19.863 [2024-11-17 01:42:28.237100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:19.863 [2024-11-17 01:42:28.237191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:19.863 [2024-11-17 01:42:28.237205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:19.863 [2024-11-17 01:42:28.237220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:19.863 [2024-11-17 01:42:28.237234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:19.863 [2024-11-17 01:42:28.237249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:19.863 [2024-11-17 01:42:28.237263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:19.863 [2024-11-17 01:42:28.237379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:19.863 [2024-11-17 01:42:28.237396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:19.863 [2024-11-17 01:42:28.237426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.863 [2024-11-17 01:42:28.237509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:19.863 [2024-11-17 01:42:28.237523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.863 [2024-11-17 01:42:28.237552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:19.863 [2024-11-17 01:42:28.237597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.863 [2024-11-17 01:42:28.237629] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:19.863 [2024-11-17 01:42:28.237643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.863 [2024-11-17 01:42:28.237696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:19.863 [2024-11-17 01:42:28.237707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:19.863 [2024-11-17 01:42:28.237719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:19.863 [2024-11-17 01:42:28.237725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:19.863 [2024-11-17 01:42:28.237731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:19.863 [2024-11-17 01:42:28.237736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:19.863 [2024-11-17 01:42:28.237743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:19.863 [2024-11-17 01:42:28.237747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:19.863 [2024-11-17 01:42:28.237759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:19.863 [2024-11-17 01:42:28.237765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237769] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:19.863 [2024-11-17 01:42:28.237777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:19.863 [2024-11-17 01:42:28.237782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:19.863 [2024-11-17 01:42:28.237803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.863 [2024-11-17 01:42:28.237810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:19.863 [2024-11-17 01:42:28.237818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:19.863 [2024-11-17 01:42:28.237823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:19.863 [2024-11-17 01:42:28.237830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:19.863 [2024-11-17 01:42:28.237835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:19.863 [2024-11-17 01:42:28.237841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:19.863 [2024-11-17 01:42:28.237850] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:19.863 [2024-11-17 01:42:28.237859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:19.863 [2024-11-17 01:42:28.237868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:19.863 [2024-11-17 01:42:28.237874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:19.863 [2024-11-17 01:42:28.237880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:19.863 [2024-11-17 01:42:28.237887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:19.863 [2024-11-17 01:42:28.237892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:19.863 [2024-11-17 01:42:28.237899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:19.863 [2024-11-17 01:42:28.237905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:19.864 [2024-11-17 01:42:28.237912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:19.864 [2024-11-17 01:42:28.237917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:19.864 [2024-11-17 01:42:28.237925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:19.864 [2024-11-17 01:42:28.237931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:19.864 [2024-11-17 01:42:28.237939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:19.864 [2024-11-17 01:42:28.237944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:19.864 [2024-11-17 01:42:28.237952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:19.864 [2024-11-17 01:42:28.237957] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:19.864 [2024-11-17 01:42:28.237965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:19.864 [2024-11-17 01:42:28.237971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:19.864 [2024-11-17 01:42:28.237978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:19.864 [2024-11-17 01:42:28.237983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:19.864 [2024-11-17 01:42:28.237990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:19.864 [2024-11-17 01:42:28.237996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.864 [2024-11-17 01:42:28.238003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:19.864 [2024-11-17 01:42:28.238009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:22:19.864 [2024-11-17 01:42:28.238015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.864 [2024-11-17 01:42:28.238048] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:19.864 [2024-11-17 01:42:28.238057] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:23.267 [2024-11-17 01:42:31.571613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.267 [2024-11-17 01:42:31.571861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:23.267 [2024-11-17 01:42:31.571935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3333.552 ms 00:22:23.267 [2024-11-17 01:42:31.571962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.267 [2024-11-17 01:42:31.597909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.267 [2024-11-17 01:42:31.598068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.267 [2024-11-17 01:42:31.598134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.720 ms 00:22:23.267 [2024-11-17 01:42:31.598160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.267 [2024-11-17 01:42:31.598307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.267 [2024-11-17 01:42:31.598446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:23.267 [2024-11-17 01:42:31.598470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:23.267 [2024-11-17 01:42:31.598493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.267 [2024-11-17 01:42:31.629320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.267 [2024-11-17 01:42:31.629459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:23.267 [2024-11-17 01:42:31.629518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.773 ms 00:22:23.267 [2024-11-17 01:42:31.629543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.267 [2024-11-17 01:42:31.629583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.267 [2024-11-17 01:42:31.629611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:23.267 [2024-11-17 01:42:31.629631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:23.267 [2024-11-17 01:42:31.629651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.267 [2024-11-17 01:42:31.630066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.267 [2024-11-17 01:42:31.630164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:23.267 [2024-11-17 01:42:31.630214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:22:23.267 [2024-11-17 01:42:31.630240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.267 [2024-11-17 01:42:31.630355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.267 [2024-11-17 01:42:31.630378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:23.267 [2024-11-17 01:42:31.630400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:22:23.267 [2024-11-17 01:42:31.630423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.267 [2024-11-17 01:42:31.644757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.267 [2024-11-17 01:42:31.644895] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:23.267 [2024-11-17 01:42:31.644911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.305 ms 00:22:23.267 [2024-11-17 01:42:31.644921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.267 [2024-11-17 01:42:31.656435] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:23.267 [2024-11-17 01:42:31.659327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.267 [2024-11-17 01:42:31.659359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:23.267 [2024-11-17 01:42:31.659371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.333 ms 00:22:23.267 [2024-11-17 01:42:31.659379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.745802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.745848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:23.529 [2024-11-17 01:42:31.745865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.385 ms 00:22:23.529 [2024-11-17 01:42:31.745873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.746060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.746074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:23.529 [2024-11-17 01:42:31.746088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:22:23.529 [2024-11-17 01:42:31.746096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.771268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.771306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:23.529 [2024-11-17 01:42:31.771321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.123 ms 00:22:23.529 [2024-11-17 01:42:31.771329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.795486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.795526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:23.529 [2024-11-17 01:42:31.795541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.108 ms 00:22:23.529 [2024-11-17 01:42:31.795549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.796165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.796183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:23.529 [2024-11-17 01:42:31.796195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:22:23.529 [2024-11-17 01:42:31.796202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.883988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.884053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:23.529 [2024-11-17 01:42:31.884078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.734 ms 00:22:23.529 [2024-11-17 01:42:31.884087] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.912121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.912181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:23.529 [2024-11-17 01:42:31.912198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.916 ms 00:22:23.529 [2024-11-17 01:42:31.912206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.939048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.939098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:23.529 [2024-11-17 01:42:31.939113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.777 ms 00:22:23.529 [2024-11-17 01:42:31.939120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.966524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.966575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:23.529 [2024-11-17 01:42:31.966590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.340 ms 00:22:23.529 [2024-11-17 01:42:31.966598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.966659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.966669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:23.529 [2024-11-17 01:42:31.966684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:23.529 [2024-11-17 01:42:31.966692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.966829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.529 [2024-11-17 01:42:31.966841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:23.529 [2024-11-17 01:42:31.966856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:23.529 [2024-11-17 01:42:31.966864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.529 [2024-11-17 01:42:31.968058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3741.307 ms, result 0 00:22:23.529 { 00:22:23.529 "name": "ftl0", 00:22:23.529 "uuid": "8762e556-2333-475f-8181-d698776a93fd" 00:22:23.529 } 00:22:23.790 01:42:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:23.790 01:42:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:23.791 01:42:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:23.791 01:42:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:23.791 01:42:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:24.052 /dev/nbd0 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:24.052 1+0 records in 00:22:24.052 1+0 records out 00:22:24.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572721 s, 7.2 MB/s 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:22:24.052 01:42:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:24.313 [2024-11-17 01:42:32.526155] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:24.313 [2024-11-17 01:42:32.526299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77809 ] 00:22:24.313 [2024-11-17 01:42:32.690232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.575 [2024-11-17 01:42:32.808157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.961  [2024-11-17T01:42:35.363Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-17T01:42:36.305Z] Copying: 387/1024 [MB] (194 MBps) [2024-11-17T01:42:37.242Z] Copying: 582/1024 [MB] (194 MBps) [2024-11-17T01:42:38.176Z] Copying: 795/1024 [MB] (213 MBps) [2024-11-17T01:42:38.745Z] Copying: 1024/1024 [MB] (average 207 MBps) 00:22:30.286 00:22:30.286 01:42:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:32.820 01:42:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:32.820 [2024-11-17 01:42:40.708292] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:32.820 [2024-11-17 01:42:40.708382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77899 ] 00:22:32.820 [2024-11-17 01:42:40.859568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.820 [2024-11-17 01:42:40.967230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.759  [2024-11-17T01:42:43.596Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-17T01:42:44.529Z] Copying: 23/1024 [MB] (11 MBps) [2024-11-17T01:42:45.459Z] Copying: 40/1024 [MB] (16 MBps) [2024-11-17T01:42:46.392Z] Copying: 57/1024 [MB] (17 MBps) [2024-11-17T01:42:47.326Z] Copying: 72/1024 [MB] (15 MBps) [2024-11-17T01:42:48.258Z] Copying: 88/1024 [MB] (16 MBps) [2024-11-17T01:42:49.631Z] Copying: 105/1024 [MB] (16 MBps) [2024-11-17T01:42:50.566Z] Copying: 120/1024 [MB] (15 MBps) [2024-11-17T01:42:51.501Z] Copying: 135/1024 [MB] (14 MBps) [2024-11-17T01:42:52.437Z] Copying: 152/1024 [MB] (17 MBps) [2024-11-17T01:42:53.371Z] Copying: 167/1024 [MB] (14 MBps) [2024-11-17T01:42:54.306Z] Copying: 178/1024 [MB] (10 MBps) [2024-11-17T01:42:55.240Z] Copying: 189/1024 [MB] (11 MBps) [2024-11-17T01:42:56.614Z] Copying: 200/1024 [MB] (10 MBps) [2024-11-17T01:42:57.550Z] Copying: 216/1024 [MB] (15 MBps) [2024-11-17T01:42:58.484Z] Copying: 231/1024 [MB] (15 MBps) [2024-11-17T01:42:59.423Z] Copying: 249/1024 [MB] (18 MBps) [2024-11-17T01:43:00.356Z] Copying: 268/1024 [MB] (18 MBps) [2024-11-17T01:43:01.290Z] Copying: 286/1024 [MB] (17 MBps) [2024-11-17T01:43:02.223Z] Copying: 301/1024 [MB] (15 MBps) [2024-11-17T01:43:03.596Z] Copying: 315/1024 [MB] (14 MBps) [2024-11-17T01:43:04.529Z] Copying: 330/1024 [MB] (14 MBps) [2024-11-17T01:43:05.463Z] Copying: 341/1024 [MB] (10 MBps) [2024-11-17T01:43:06.529Z] Copying: 357/1024 [MB] (15 MBps) [2024-11-17T01:43:07.463Z] Copying: 368/1024 [MB] (11 MBps) [2024-11-17T01:43:08.396Z] Copying: 380/1024 [MB] (11 MBps) [2024-11-17T01:43:09.330Z] Copying: 404/1024 [MB] (24 MBps) [2024-11-17T01:43:10.263Z] Copying: 438/1024 [MB] (33 MBps) [2024-11-17T01:43:11.639Z] Copying: 473/1024 [MB] (34 MBps) [2024-11-17T01:43:12.205Z] Copying: 507/1024 [MB] (34 MBps) [2024-11-17T01:43:13.579Z] Copying: 542/1024 [MB] (34 MBps) [2024-11-17T01:43:14.514Z] Copying: 574/1024 [MB] (32 MBps) [2024-11-17T01:43:15.448Z] Copying: 588/1024 [MB] (14 MBps) [2024-11-17T01:43:16.382Z] Copying: 602/1024 [MB] (13 MBps) [2024-11-17T01:43:17.316Z] Copying: 630/1024 [MB] (28 MBps) [2024-11-17T01:43:18.274Z] Copying: 652/1024 [MB] (21 MBps) [2024-11-17T01:43:19.208Z] Copying: 668/1024 [MB] (16 MBps) [2024-11-17T01:43:20.587Z] Copying: 687/1024 [MB] (18 MBps) [2024-11-17T01:43:21.538Z] Copying: 701/1024 [MB] (14 MBps) [2024-11-17T01:43:22.480Z] Copying: 722/1024 [MB] (21 MBps) [2024-11-17T01:43:23.421Z] Copying: 734/1024 [MB] (12 MBps) [2024-11-17T01:43:24.356Z] Copying: 745/1024 [MB] (10 MBps) [2024-11-17T01:43:25.291Z] Copying: 757/1024 [MB] (11 MBps) [2024-11-17T01:43:26.225Z] Copying: 772/1024 [MB] (15 MBps) [2024-11-17T01:43:27.601Z] Copying: 785/1024 [MB] (13 MBps) [2024-11-17T01:43:28.535Z] Copying: 798/1024 [MB] (12 MBps) [2024-11-17T01:43:29.469Z] Copying: 813/1024 [MB] (14 MBps) [2024-11-17T01:43:30.403Z] Copying: 832/1024 [MB] (19 MBps) [2024-11-17T01:43:31.337Z] Copying: 850/1024 [MB] (17 MBps) [2024-11-17T01:43:32.271Z] Copying: 862/1024 [MB] (11 MBps) 
[2024-11-17T01:43:33.207Z] Copying: 876/1024 [MB] (14 MBps) [2024-11-17T01:43:34.580Z] Copying: 891/1024 [MB] (14 MBps) [2024-11-17T01:43:35.515Z] Copying: 908/1024 [MB] (16 MBps) [2024-11-17T01:43:36.449Z] Copying: 939/1024 [MB] (31 MBps) [2024-11-17T01:43:37.384Z] Copying: 972/1024 [MB] (33 MBps) [2024-11-17T01:43:37.951Z] Copying: 1007/1024 [MB] (35 MBps) [2024-11-17T01:43:38.518Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:23:30.059 00:23:30.059 01:43:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:30.059 01:43:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:30.059 01:43:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:30.321 [2024-11-17 01:43:38.683594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.321 [2024-11-17 01:43:38.683633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:30.321 [2024-11-17 01:43:38.683644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:30.321 [2024-11-17 01:43:38.683652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.321 [2024-11-17 01:43:38.683670] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:30.321 [2024-11-17 01:43:38.685735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.321 [2024-11-17 01:43:38.685860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:30.321 [2024-11-17 01:43:38.685877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:23:30.321 [2024-11-17 01:43:38.685884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.321 [2024-11-17 01:43:38.687930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.321 [2024-11-17 01:43:38.687952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:30.321 [2024-11-17 01:43:38.687962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:23:30.321 [2024-11-17 01:43:38.687968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.321 [2024-11-17 01:43:38.700779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.321 [2024-11-17 01:43:38.700814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:30.321 [2024-11-17 01:43:38.700824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.794 ms 00:23:30.321 [2024-11-17 01:43:38.700831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.321 [2024-11-17 01:43:38.705728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.321 [2024-11-17 01:43:38.705752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:30.321 [2024-11-17 01:43:38.705762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.867 ms 00:23:30.321 [2024-11-17 01:43:38.705769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.321 [2024-11-17 01:43:38.724360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.321 [2024-11-17 01:43:38.724465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:30.321 [2024-11-17 01:43:38.724481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.530 ms 00:23:30.321 [2024-11-17 01:43:38.724487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.321 [2024-11-17 01:43:38.736441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.321 [2024-11-17 01:43:38.736467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:30.321 [2024-11-17 01:43:38.736477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.925 ms 00:23:30.321 [2024-11-17 01:43:38.736486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.321 [2024-11-17 01:43:38.736589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.321 [2024-11-17 01:43:38.736596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:30.321 [2024-11-17 01:43:38.736605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:30.321 [2024-11-17 01:43:38.736611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.321 [2024-11-17 01:43:38.754402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.321 [2024-11-17 01:43:38.754426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:30.321 [2024-11-17 01:43:38.754435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.776 ms 00:23:30.321 [2024-11-17 01:43:38.754441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.321 [2024-11-17 01:43:38.772721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.321 [2024-11-17 01:43:38.772745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:30.321 [2024-11-17 01:43:38.772754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.251 ms 00:23:30.321 [2024-11-17 01:43:38.772760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.582 [2024-11-17 01:43:38.790741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.582 [2024-11-17 01:43:38.790851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:30.582 [2024-11-17 01:43:38.790866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.950 ms 00:23:30.582 [2024-11-17 01:43:38.790873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.582 [2024-11-17 01:43:38.808047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.582 [2024-11-17 01:43:38.808071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:30.582 [2024-11-17 01:43:38.808080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.095 ms 00:23:30.582 [2024-11-17 01:43:38.808086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.582 [2024-11-17 01:43:38.808114] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:30.582 [2024-11-17 01:43:38.808124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 
[2024-11-17 01:43:38.808151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:30.582 [2024-11-17 01:43:38.808203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:23:30.583 [2024-11-17 01:43:38.808314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:30.583 [2024-11-17 01:43:38.808665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:30.584 [2024-11-17 01:43:38.808775] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:30.584 [2024-11-17 01:43:38.808782] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8762e556-2333-475f-8181-d698776a93fd 00:23:30.584 [2024-11-17 01:43:38.808803] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:30.584 [2024-11-17 01:43:38.808813] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:23:30.584 [2024-11-17 01:43:38.808818] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:30.584 [2024-11-17 01:43:38.808826] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:30.584 [2024-11-17 01:43:38.808832] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:30.584 [2024-11-17 01:43:38.808839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:30.584 [2024-11-17 01:43:38.808844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:30.584 [2024-11-17 01:43:38.808850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:30.584 [2024-11-17 01:43:38.808854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:30.584 [2024-11-17 01:43:38.808861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.584 [2024-11-17 01:43:38.808867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:30.584 [2024-11-17 01:43:38.808875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:23:30.584 [2024-11-17 01:43:38.808880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.818460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.584 [2024-11-17 01:43:38.818552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:30.584 [2024-11-17 01:43:38.818567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.555 ms 00:23:30.584 [2024-11-17 01:43:38.818573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.818861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.584 [2024-11-17 01:43:38.818868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:30.584 [2024-11-17 01:43:38.818876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:23:30.584 [2024-11-17 01:43:38.818881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.851557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.851585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:30.584 [2024-11-17 01:43:38.851595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.851600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.851646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.851652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:30.584 [2024-11-17 01:43:38.851659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.851664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.851715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.851723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:30.584 [2024-11-17 01:43:38.851732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.851738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.851753] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.851759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:30.584 [2024-11-17 01:43:38.851766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.851772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.911650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.911687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:30.584 [2024-11-17 01:43:38.911696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.911702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.960113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.960148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:30.584 [2024-11-17 01:43:38.960157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.960163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.960234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.960242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:30.584 [2024-11-17 01:43:38.960250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.960257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.960294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.960302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:30.584 [2024-11-17 01:43:38.960309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.960315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.960384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.960392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:30.584 [2024-11-17 01:43:38.960399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.960405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.960435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.960442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:30.584 [2024-11-17 01:43:38.960449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.960455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.960483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.960489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:30.584 [2024-11-17 01:43:38.960496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.960502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:30.584 [2024-11-17 01:43:38.960540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.584 [2024-11-17 01:43:38.960547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:30.584 [2024-11-17 01:43:38.960554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.584 [2024-11-17 01:43:38.960559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.584 [2024-11-17 01:43:38.960659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 277.039 ms, result 0 00:23:30.584 true 00:23:30.584 01:43:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77667 00:23:30.585 01:43:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77667 00:23:30.585 01:43:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:30.857 [2024-11-17 01:43:39.052854] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:30.857 [2024-11-17 01:43:39.052968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78507 ] 00:23:30.857 [2024-11-17 01:43:39.207338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.857 [2024-11-17 01:43:39.284987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.340  [2024-11-17T01:43:41.739Z] Copying: 260/1024 [MB] (260 MBps) [2024-11-17T01:43:42.681Z] Copying: 521/1024 [MB] (260 MBps) [2024-11-17T01:43:43.622Z] Copying: 784/1024 [MB] (263 MBps) [2024-11-17T01:43:44.192Z] Copying: 1024/1024 [MB] (average 261 MBps) 00:23:35.733 00:23:35.733 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77667 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:35.733 01:43:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:35.733 [2024-11-17 01:43:44.002577] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:35.733 [2024-11-17 01:43:44.002929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78561 ] 00:23:35.733 [2024-11-17 01:43:44.159296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.994 [2024-11-17 01:43:44.238755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.994 [2024-11-17 01:43:44.445184] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.994 [2024-11-17 01:43:44.445348] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.255 [2024-11-17 01:43:44.508896] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:36.255 [2024-11-17 01:43:44.509679] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:36.255 [2024-11-17 01:43:44.510240] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:36.827 [2024-11-17 01:43:45.027459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.827 [2024-11-17 01:43:45.027523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.828 [2024-11-17 01:43:45.027539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.828 [2024-11-17 01:43:45.027548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.027610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.027621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.828 [2024-11-17 01:43:45.027631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:36.828 [2024-11-17 01:43:45.027638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.027658] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.828 [2024-11-17 01:43:45.028768] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.828 [2024-11-17 01:43:45.028840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.028851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.828 [2024-11-17 01:43:45.028862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:23:36.828 [2024-11-17 01:43:45.028871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.030705] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:36.828 [2024-11-17 01:43:45.044842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.044897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:36.828 [2024-11-17 01:43:45.044911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.140 ms 00:23:36.828 [2024-11-17 01:43:45.044920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.044997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.045007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:23:36.828 [2024-11-17 01:43:45.045016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:36.828 [2024-11-17 01:43:45.045024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.053320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.053364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.828 [2024-11-17 01:43:45.053375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.218 ms 00:23:36.828 [2024-11-17 01:43:45.053383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.053467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.053476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.828 [2024-11-17 01:43:45.053484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:36.828 [2024-11-17 01:43:45.053492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.053541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.053551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.828 [2024-11-17 01:43:45.053559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:36.828 [2024-11-17 01:43:45.053566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.053590] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.828 [2024-11-17 01:43:45.057650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.057864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.828 [2024-11-17 01:43:45.057886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.067 ms 00:23:36.828 [2024-11-17 01:43:45.057895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.057934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.057943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.828 [2024-11-17 01:43:45.057952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:36.828 [2024-11-17 01:43:45.057959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.058019] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.828 [2024-11-17 01:43:45.058042] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:36.828 [2024-11-17 01:43:45.058079] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.828 [2024-11-17 01:43:45.058101] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:36.828 [2024-11-17 01:43:45.058206] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.828 [2024-11-17 01:43:45.058218] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.828 
[2024-11-17 01:43:45.058229] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:36.828 [2024-11-17 01:43:45.058242] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.828 [2024-11-17 01:43:45.058251] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.828 [2024-11-17 01:43:45.058259] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:36.828 [2024-11-17 01:43:45.058267] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.828 [2024-11-17 01:43:45.058275] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.828 [2024-11-17 01:43:45.058283] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.828 [2024-11-17 01:43:45.058291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.058299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.828 [2024-11-17 01:43:45.058307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:23:36.828 [2024-11-17 01:43:45.058314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.058399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.828 [2024-11-17 01:43:45.058410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.828 [2024-11-17 01:43:45.058418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:36.828 [2024-11-17 01:43:45.058426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.828 [2024-11-17 01:43:45.058527] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.828 [2024-11-17 01:43:45.058538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.828 [2024-11-17 01:43:45.058546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.828 [2024-11-17 01:43:45.058555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.828 [2024-11-17 01:43:45.058562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.828 [2024-11-17 01:43:45.058569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.828 [2024-11-17 01:43:45.058575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:36.828 [2024-11-17 01:43:45.058583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.828 [2024-11-17 01:43:45.058592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:36.828 [2024-11-17 01:43:45.058599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.828 [2024-11-17 01:43:45.058606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.828 [2024-11-17 01:43:45.058619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:36.828 [2024-11-17 01:43:45.058626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.828 [2024-11-17 01:43:45.058633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.828 [2024-11-17 01:43:45.058640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:36.828 [2024-11-17 01:43:45.058647] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.828 [2024-11-17 01:43:45.058653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.828 [2024-11-17 01:43:45.058659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:36.828 [2024-11-17 01:43:45.058666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.828 [2024-11-17 01:43:45.058674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.828 [2024-11-17 01:43:45.058682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:36.828 [2024-11-17 01:43:45.058689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.828 [2024-11-17 01:43:45.058696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.828 [2024-11-17 01:43:45.058702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:36.828 [2024-11-17 01:43:45.058709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.828 [2024-11-17 01:43:45.058716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.828 [2024-11-17 01:43:45.058723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:36.828 [2024-11-17 01:43:45.058729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.828 [2024-11-17 01:43:45.058736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.828 [2024-11-17 01:43:45.058743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:36.828 [2024-11-17 01:43:45.058749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.828 [2024-11-17 01:43:45.058757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.829 [2024-11-17 01:43:45.058763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:36.829 [2024-11-17 01:43:45.058769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.829 [2024-11-17 01:43:45.058776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.829 [2024-11-17 01:43:45.058782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:36.829 [2024-11-17 01:43:45.058810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.829 [2024-11-17 01:43:45.058818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.829 [2024-11-17 01:43:45.058825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:36.829 [2024-11-17 01:43:45.058832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.829 [2024-11-17 01:43:45.058839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.829 [2024-11-17 01:43:45.058846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:36.829 [2024-11-17 01:43:45.058852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.829 [2024-11-17 01:43:45.058860] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.829 [2024-11-17 01:43:45.058869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.829 [2024-11-17 01:43:45.058880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.829 [2024-11-17 01:43:45.058888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.829 [2024-11-17 
01:43:45.058895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.829 [2024-11-17 01:43:45.058902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.829 [2024-11-17 01:43:45.058909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.829 [2024-11-17 01:43:45.058916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.829 [2024-11-17 01:43:45.058923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.829 [2024-11-17 01:43:45.058932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.829 [2024-11-17 01:43:45.058941] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.829 [2024-11-17 01:43:45.058951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.829 [2024-11-17 01:43:45.058960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:36.829 [2024-11-17 01:43:45.058967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:36.829 [2024-11-17 01:43:45.058974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:36.829 [2024-11-17 01:43:45.058981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:36.829 [2024-11-17 01:43:45.058988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:36.829 [2024-11-17 01:43:45.058996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:36.829 [2024-11-17 01:43:45.059003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:36.829 [2024-11-17 01:43:45.059010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:36.829 [2024-11-17 01:43:45.059017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:36.829 [2024-11-17 01:43:45.059024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:36.829 [2024-11-17 01:43:45.059031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:36.829 [2024-11-17 01:43:45.059039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:36.829 [2024-11-17 01:43:45.059046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:36.829 [2024-11-17 01:43:45.059054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:36.829 [2024-11-17 01:43:45.059061] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:23:36.829 [2024-11-17 01:43:45.059069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.829 [2024-11-17 01:43:45.059078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.829 [2024-11-17 01:43:45.059085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.829 [2024-11-17 01:43:45.059092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.829 [2024-11-17 01:43:45.059099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.829 [2024-11-17 01:43:45.059107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.059114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.829 [2024-11-17 01:43:45.059123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:23:36.829 [2024-11-17 01:43:45.059131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.091698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.091930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.829 [2024-11-17 01:43:45.091995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.520 ms 00:23:36.829 [2024-11-17 01:43:45.092021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.092132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.092155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.829 [2024-11-17 01:43:45.092221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:36.829 [2024-11-17 01:43:45.092244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.140449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.140690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.829 [2024-11-17 01:43:45.140768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.126 ms 00:23:36.829 [2024-11-17 01:43:45.140813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.140890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.140916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.829 [2024-11-17 01:43:45.140936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.829 [2024-11-17 01:43:45.140965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.141584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.141731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.829 [2024-11-17 01:43:45.141804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:23:36.829 [2024-11-17 01:43:45.141867] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.142050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.142112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.829 [2024-11-17 01:43:45.142136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:23:36.829 [2024-11-17 01:43:45.142178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.158039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.158228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.829 [2024-11-17 01:43:45.158298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.820 ms 00:23:36.829 [2024-11-17 01:43:45.158565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.173251] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:36.829 [2024-11-17 01:43:45.173445] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:36.829 [2024-11-17 01:43:45.173512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.173535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:36.829 [2024-11-17 01:43:45.173557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.770 ms 00:23:36.829 [2024-11-17 01:43:45.173577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.199860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.200036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:36.829 [2024-11-17 01:43:45.200124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.202 ms 00:23:36.829 [2024-11-17 01:43:45.200148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.213137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.213316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:36.829 [2024-11-17 01:43:45.213376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.932 ms 00:23:36.829 [2024-11-17 01:43:45.213398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.236119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.236339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:36.829 [2024-11-17 01:43:45.236413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.372 ms 00:23:36.829 [2024-11-17 01:43:45.236437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.829 [2024-11-17 01:43:45.237257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.829 [2024-11-17 01:43:45.237408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:36.829 [2024-11-17 01:43:45.237472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:23:36.829 [2024-11-17 01:43:45.237494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
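A quick cross-check of the L2P numbers in the startup notices above; a minimal sketch (not SPDK code), with the 4 KiB FTL block size inferred from the layout dump rather than printed by the log:

```c
/* Minimal sketch (not SPDK code): cross-checking the L2P notices above.
 * The 4 KiB block size is an inference from the layout dump, not a
 * value the log prints directly. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t l2p_entries = 20971520; /* "L2P entries: 20971520" */
    const uint64_t addr_size   = 4;        /* "L2P address size: 4" (bytes/entry) */
    const uint64_t block_size  = 4096;     /* inferred FTL block size */

    /* 20971520 * 4 B = 80 MiB -- exactly the "Region l2p ... blocks: 80.00 MiB"
     * area in the NV cache layout dump */
    printf("L2P table size: %llu MiB\n",
           (unsigned long long)(l2p_entries * addr_size >> 20));

    /* 20971520 entries * 4 KiB mapped per entry = 80 GiB of user LBA space */
    printf("mapped user space: %llu GiB\n",
           (unsigned long long)(l2p_entries * block_size >> 30));
    return 0;
}
```

The ftl_l2p_cache notice just below ("l2p maximum resident size is: 9 (of 10) MiB") shows that only a small window of that 80 MiB table is kept resident at a time.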
00:23:37.091 [2024-11-17 01:43:45.302635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.091 [2024-11-17 01:43:45.302939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:37.091 [2024-11-17 01:43:45.303182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.103 ms 00:23:37.091 [2024-11-17 01:43:45.303224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.091 [2024-11-17 01:43:45.315146] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:37.091 [2024-11-17 01:43:45.318782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.091 [2024-11-17 01:43:45.319050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:37.091 [2024-11-17 01:43:45.319123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.478 ms 00:23:37.091 [2024-11-17 01:43:45.319156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.091 [2024-11-17 01:43:45.319291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.091 [2024-11-17 01:43:45.319319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:37.091 [2024-11-17 01:43:45.319340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:37.091 [2024-11-17 01:43:45.319361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.091 [2024-11-17 01:43:45.319467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.091 [2024-11-17 01:43:45.319655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:37.091 [2024-11-17 01:43:45.319681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:37.091 [2024-11-17 01:43:45.319700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.091 [2024-11-17 01:43:45.319749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.091 [2024-11-17 01:43:45.319772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:37.091 [2024-11-17 01:43:45.319816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:37.091 [2024-11-17 01:43:45.319838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.091 [2024-11-17 01:43:45.319887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:37.091 [2024-11-17 01:43:45.319979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.091 [2024-11-17 01:43:45.319992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:37.091 [2024-11-17 01:43:45.320003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:37.091 [2024-11-17 01:43:45.320016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.091 [2024-11-17 01:43:45.346087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.091 [2024-11-17 01:43:45.346273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:37.091 [2024-11-17 01:43:45.346296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.043 ms 00:23:37.091 [2024-11-17 01:43:45.346307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.091 [2024-11-17 01:43:45.346492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.091 [2024-11-17 
01:43:45.346518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:37.091 [2024-11-17 01:43:45.346529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:37.091 [2024-11-17 01:43:45.346538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.091 [2024-11-17 01:43:45.347887] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.869 ms, result 0 00:23:38.036  [2024-11-17T01:43:47.439Z] Copying: 10/1024 [MB] (10 MBps) [... ~50 per-second progress updates, 10-45 MBps, condensed ...] [2024-11-17T01:44:37.004Z] Copying:
1023/1024 [MB] (19 MBps) [2024-11-17T01:44:37.004Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-17 01:44:36.859482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.545 [2024-11-17 01:44:36.859596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:28.545 [2024-11-17 01:44:36.859692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:28.545 [2024-11-17 01:44:36.859703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.545 [2024-11-17 01:44:36.862393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:28.545 [2024-11-17 01:44:36.865315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.545 [2024-11-17 01:44:36.865343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:28.545 [2024-11-17 01:44:36.865352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:24:28.545 [2024-11-17 01:44:36.865363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.545 [2024-11-17 01:44:36.875890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.545 [2024-11-17 01:44:36.875990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:28.545 [2024-11-17 01:44:36.876036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.923 ms 00:24:28.545 [2024-11-17 01:44:36.876054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.545 [2024-11-17 01:44:36.894051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.545 [2024-11-17 01:44:36.894146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:28.545 [2024-11-17 01:44:36.894192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.972 ms 00:24:28.545 [2024-11-17 01:44:36.894210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.545 [2024-11-17 01:44:36.898937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.545 [2024-11-17 01:44:36.899025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:28.545 [2024-11-17 01:44:36.899070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.688 ms 00:24:28.545 [2024-11-17 01:44:36.899087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.545 [2024-11-17 01:44:36.917244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.545 [2024-11-17 01:44:36.917337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:28.545 [2024-11-17 01:44:36.917350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.109 ms 00:24:28.545 [2024-11-17 01:44:36.917356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.545 [2024-11-17 01:44:36.928623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.545 [2024-11-17 01:44:36.928718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:28.545 [2024-11-17 01:44:36.928731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.245 ms 00:24:28.545 [2024-11-17 01:44:36.928737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.545 [2024-11-17 01:44:36.989721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.545 [2024-11-17 01:44:36.989762] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:28.545 [2024-11-17 01:44:36.989775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.966 ms 00:24:28.546 [2024-11-17 01:44:36.989781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.807 [2024-11-17 01:44:37.008008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.807 [2024-11-17 01:44:37.008033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:28.807 [2024-11-17 01:44:37.008041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.204 ms 00:24:28.807 [2024-11-17 01:44:37.008047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.807 [2024-11-17 01:44:37.026216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.807 [2024-11-17 01:44:37.026240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:28.807 [2024-11-17 01:44:37.026248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.144 ms 00:24:28.807 [2024-11-17 01:44:37.026253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.807 [2024-11-17 01:44:37.044181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.807 [2024-11-17 01:44:37.044211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:28.807 [2024-11-17 01:44:37.044219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.902 ms 00:24:28.807 [2024-11-17 01:44:37.044225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.807 [2024-11-17 01:44:37.061722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.807 [2024-11-17 01:44:37.061747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:28.807 [2024-11-17 01:44:37.061755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.454 ms 00:24:28.807 [2024-11-17 01:44:37.061760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.807 [2024-11-17 01:44:37.061785] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:28.807 [2024-11-17 01:44:37.061807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100096 / 261120 wr_cnt: 1 state: open 00:24:28.807 [2024-11-17 01:44:37.061816 .. 01:44:37.062392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100 (99 identical entries): 0 / 261120 wr_cnt: 0 state: free 00:24:28.808 [2024-11-17 01:44:37.062404] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:28.808 [2024-11-17 01:44:37.062410] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8762e556-2333-475f-8181-d698776a93fd 00:24:28.808 [2024-11-17 01:44:37.062418] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100096 00:24:28.808 [2024-11-17 01:44:37.062424] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101056 00:24:28.808 [2024-11-17 01:44:37.062434] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100096 00:24:28.808 [2024-11-17 01:44:37.062440] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096
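The WAF figure in the stats dump above is simply total writes divided by user writes; a minimal sketch (not SPDK code) with the two counters as printed, where reading the 960-block difference as FTL metadata overhead is an inference:

```c
/* Minimal sketch (not SPDK code): reproducing the WAF reported by
 * ftl_dev_dump_stats from the two write counters above. */
#include <stdio.h>

int main(void)
{
    const double total_writes = 101056; /* "total writes: 101056" */
    const double user_writes  = 100096; /* "user writes: 100096" */

    /* 101056 / 100096 = 1.0096 -- matches "WAF: 1.0096" */
    printf("WAF: %.4f\n", total_writes / user_writes);
    /* 960 extra blocks written by the FTL itself (metadata, inferred) */
    printf("overhead: %.0f blocks\n", total_writes - user_writes);
    return 0;
}
```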
00:24:28.808 [2024-11-17 01:44:37.062446] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:28.808 [2024-11-17 01:44:37.062451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:28.809 [2024-11-17 01:44:37.062457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:28.809 [2024-11-17 01:44:37.062463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:28.809 [2024-11-17 01:44:37.062468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:28.809 [2024-11-17 01:44:37.062474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.809 [2024-11-17 01:44:37.062479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:28.809 [2024-11-17 01:44:37.062485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:24:28.809 [2024-11-17 01:44:37.062491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.071773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.809 [2024-11-17 01:44:37.071886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:28.809 [2024-11-17 01:44:37.071897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.271 ms 00:24:28.809 [2024-11-17 01:44:37.071903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.072173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.809 [2024-11-17 01:44:37.072188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:28.809 [2024-11-17 01:44:37.072195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:24:28.809 [2024-11-17 01:44:37.072205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.097822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.097848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.809 [2024-11-17 01:44:37.097856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.097862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.097898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.097905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.809 [2024-11-17 01:44:37.097912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.097920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.097962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.097969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.809 [2024-11-17 01:44:37.097975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.097981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.097992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.097998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.809 [2024-11-17 01:44:37.098004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.098009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.156368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.156398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.809 [2024-11-17 01:44:37.156406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.156412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.204634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.204664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.809 [2024-11-17 01:44:37.204672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.204682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.204718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.204725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.809 [2024-11-17 01:44:37.204731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.204737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.204776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.204783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.809 [2024-11-17 01:44:37.204804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.204810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.204882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.204891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.809 [2024-11-17 01:44:37.204898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.204903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.204927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.204934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:28.809 [2024-11-17 01:44:37.204940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.204945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.204975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.204982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.809 [2024-11-17 01:44:37.204989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.204994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.205024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.809 [2024-11-17 01:44:37.205031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.809 [2024-11-17 01:44:37.205037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.809 [2024-11-17 01:44:37.205043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.809 [2024-11-17 01:44:37.205131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.499 ms, result 0 00:24:30.195 00:24:30.195 00:24:30.195 01:44:38 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:32.110 01:44:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:32.110 [2024-11-17 01:44:40.456003] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:32.110 [2024-11-17 01:44:40.456090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79141 ] 00:24:32.371 [2024-11-17 01:44:40.605541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.371 [2024-11-17 01:44:40.680785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.633 [2024-11-17 01:44:40.887954] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:32.633 [2024-11-17 01:44:40.888092] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:32.633 [2024-11-17 01:44:41.042136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.042171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:32.633 [2024-11-17 01:44:41.042185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:32.633 [2024-11-17 01:44:41.042191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.042227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.042235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:32.633 [2024-11-17 01:44:41.042243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:32.633 [2024-11-17 01:44:41.042248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.042261] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:32.633 [2024-11-17 01:44:41.042829] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:32.633 [2024-11-17 01:44:41.042842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.042848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:32.633 [2024-11-17 01:44:41.042855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:24:32.633 [2024-11-17 01:44:41.042861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.043810] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:32.633 [2024-11-17 01:44:41.054015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.054150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:32.633 [2024-11-17 01:44:41.054164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.207 ms 00:24:32.633 [2024-11-17 01:44:41.054171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.054213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.054220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:32.633 [2024-11-17 01:44:41.054227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:32.633 [2024-11-17 01:44:41.054232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.058600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.058625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:32.633 [2024-11-17 01:44:41.058633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.323 ms 00:24:32.633 [2024-11-17 01:44:41.058639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.058694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.058701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:32.633 [2024-11-17 01:44:41.058708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:32.633 [2024-11-17 01:44:41.058713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.058746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.058754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:32.633 [2024-11-17 01:44:41.058760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:32.633 [2024-11-17 01:44:41.058766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.058779] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:32.633 [2024-11-17 01:44:41.061566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.061679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:32.633 [2024-11-17 01:44:41.061691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:24:32.633 [2024-11-17 01:44:41.061700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.061728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.061734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:32.633 [2024-11-17 01:44:41.061740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:32.633 [2024-11-17 01:44:41.061746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.061759] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:32.633 [2024-11-17 01:44:41.061773] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:32.633 [2024-11-17 01:44:41.061811] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:32.633 [2024-11-17 01:44:41.061826] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:32.633 [2024-11-17 01:44:41.061906] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:32.633 [2024-11-17 01:44:41.061914] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:32.633 [2024-11-17 01:44:41.061922] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:32.633 [2024-11-17 01:44:41.061930] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:32.633 [2024-11-17 01:44:41.061937] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:32.633 [2024-11-17 01:44:41.061943] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:32.633 [2024-11-17 01:44:41.061949] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:32.633 [2024-11-17 01:44:41.061955] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:32.633 [2024-11-17 01:44:41.061960] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:32.633 [2024-11-17 01:44:41.061970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.061975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:32.633 [2024-11-17 01:44:41.061981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:24:32.633 [2024-11-17 01:44:41.061987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.062050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.633 [2024-11-17 01:44:41.062057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:32.633 [2024-11-17 01:44:41.062064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:32.633 [2024-11-17 01:44:41.062069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.633 [2024-11-17 01:44:41.062143] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:32.633 [2024-11-17 01:44:41.062152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:32.633 [2024-11-17 01:44:41.062159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:32.633 [2024-11-17 01:44:41.062164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.633 [2024-11-17 01:44:41.062170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:32.633 [2024-11-17 01:44:41.062175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:32.633 [2024-11-17 01:44:41.062181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:32.633 [2024-11-17 01:44:41.062186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:32.633 [2024-11-17 01:44:41.062192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:32.633 [2024-11-17 01:44:41.062197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:32.633 [2024-11-17 01:44:41.062204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:32.633 [2024-11-17 01:44:41.062210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:32.633 [2024-11-17 01:44:41.062215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:32.633 [2024-11-17 01:44:41.062220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:32.633 [2024-11-17 01:44:41.062225] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:32.633 [2024-11-17 01:44:41.062234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.633 [2024-11-17 01:44:41.062239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:32.634 [2024-11-17 01:44:41.062244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:32.634 [2024-11-17 01:44:41.062248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.634 [2024-11-17 01:44:41.062253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:32.634 [2024-11-17 01:44:41.062258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:32.634 [2024-11-17 01:44:41.062263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.634 [2024-11-17 01:44:41.062268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:32.634 [2024-11-17 01:44:41.062273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:32.634 [2024-11-17 01:44:41.062278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.634 [2024-11-17 01:44:41.062283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:32.634 [2024-11-17 01:44:41.062288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:32.634 [2024-11-17 01:44:41.062292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.634 [2024-11-17 01:44:41.062297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:32.634 [2024-11-17 01:44:41.062302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:32.634 [2024-11-17 01:44:41.062307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.634 [2024-11-17 01:44:41.062312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:32.634 [2024-11-17 01:44:41.062317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:32.634 [2024-11-17 01:44:41.062323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:32.634 [2024-11-17 01:44:41.062328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:32.634 [2024-11-17 01:44:41.062332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:32.634 [2024-11-17 01:44:41.062337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:32.634 [2024-11-17 01:44:41.062342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:32.634 [2024-11-17 01:44:41.062346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:32.634 [2024-11-17 01:44:41.062351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.634 [2024-11-17 01:44:41.062356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:32.634 [2024-11-17 01:44:41.062361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:32.634 [2024-11-17 01:44:41.062369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.634 [2024-11-17 01:44:41.062375] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:32.634 [2024-11-17 01:44:41.062381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:32.634 [2024-11-17 01:44:41.062387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:24:32.634 [2024-11-17 01:44:41.062392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.634 [2024-11-17 01:44:41.062398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:32.634 [2024-11-17 01:44:41.062403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:32.634 [2024-11-17 01:44:41.062408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:32.634 [2024-11-17 01:44:41.062413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:32.634 [2024-11-17 01:44:41.062418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:32.634 [2024-11-17 01:44:41.062423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:32.634 [2024-11-17 01:44:41.062430] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:32.634 [2024-11-17 01:44:41.062437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:32.634 [2024-11-17 01:44:41.062443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:32.634 [2024-11-17 01:44:41.062449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:32.634 [2024-11-17 01:44:41.062454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:32.634 [2024-11-17 01:44:41.062459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:32.634 [2024-11-17 01:44:41.062464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:32.634 [2024-11-17 01:44:41.062470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:32.634 [2024-11-17 01:44:41.062476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:32.634 [2024-11-17 01:44:41.062481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:32.634 [2024-11-17 01:44:41.062486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:32.634 [2024-11-17 01:44:41.062491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:32.634 [2024-11-17 01:44:41.062496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:32.634 [2024-11-17 01:44:41.062501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:32.634 [2024-11-17 01:44:41.062507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:32.634 [2024-11-17 01:44:41.062514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:24:32.634 [2024-11-17 01:44:41.062519] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:32.634 [2024-11-17 01:44:41.062527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:32.634 [2024-11-17 01:44:41.062533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:32.634 [2024-11-17 01:44:41.062538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:32.634 [2024-11-17 01:44:41.062544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:32.634 [2024-11-17 01:44:41.062551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:32.634 [2024-11-17 01:44:41.062557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.634 [2024-11-17 01:44:41.062562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:32.634 [2024-11-17 01:44:41.062568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:24:32.634 [2024-11-17 01:44:41.062574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.634 [2024-11-17 01:44:41.083784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.634 [2024-11-17 01:44:41.083905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:32.634 [2024-11-17 01:44:41.083948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.177 ms 00:24:32.634 [2024-11-17 01:44:41.083966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.634 [2024-11-17 01:44:41.084049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.634 [2024-11-17 01:44:41.084069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:32.634 [2024-11-17 01:44:41.084084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:32.634 [2024-11-17 01:44:41.084098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.120420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.120534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:32.896 [2024-11-17 01:44:41.120583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.268 ms 00:24:32.896 [2024-11-17 01:44:41.120601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.120642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.120660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:32.896 [2024-11-17 01:44:41.120676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:32.896 [2024-11-17 01:44:41.120695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.121022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.121055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:32.896 [2024-11-17 
01:44:41.121073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:24:32.896 [2024-11-17 01:44:41.121087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.121202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.121231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:32.896 [2024-11-17 01:44:41.121248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:32.896 [2024-11-17 01:44:41.121262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.131851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.131937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:32.896 [2024-11-17 01:44:41.131976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.502 ms 00:24:32.896 [2024-11-17 01:44:41.131996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.141661] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:32.896 [2024-11-17 01:44:41.141764] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:32.896 [2024-11-17 01:44:41.141826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.141842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:32.896 [2024-11-17 01:44:41.141877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.754 ms 00:24:32.896 [2024-11-17 01:44:41.141893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.160337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.160434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:32.896 [2024-11-17 01:44:41.160473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.411 ms 00:24:32.896 [2024-11-17 01:44:41.160491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.169261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.169352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:32.896 [2024-11-17 01:44:41.169397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.734 ms 00:24:32.896 [2024-11-17 01:44:41.169413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.178104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.178202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:32.896 [2024-11-17 01:44:41.178245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.662 ms 00:24:32.896 [2024-11-17 01:44:41.178262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.178719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.178802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:32.896 [2024-11-17 01:44:41.178843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.397 ms 00:24:32.896 [2024-11-17 01:44:41.178862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.223310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.224942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:32.896 [2024-11-17 01:44:41.224962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.424 ms 00:24:32.896 [2024-11-17 01:44:41.224969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.233360] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:32.896 [2024-11-17 01:44:41.235180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.235206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:32.896 [2024-11-17 01:44:41.235215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.880 ms 00:24:32.896 [2024-11-17 01:44:41.235223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.235286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.235295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:32.896 [2024-11-17 01:44:41.235302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:32.896 [2024-11-17 01:44:41.235310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.236337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.236364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:32.896 [2024-11-17 01:44:41.236373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:24:32.896 [2024-11-17 01:44:41.236379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.236409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.236416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:32.896 [2024-11-17 01:44:41.236423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:32.896 [2024-11-17 01:44:41.236429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.236454] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:32.896 [2024-11-17 01:44:41.236464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.236470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:32.896 [2024-11-17 01:44:41.236476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:32.896 [2024-11-17 01:44:41.236482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.254914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.254943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:32.896 [2024-11-17 01:44:41.254952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.419 ms 00:24:32.896 [2024-11-17 01:44:41.254962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:32.896 [2024-11-17 01:44:41.255018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.896 [2024-11-17 01:44:41.255026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:32.896 [2024-11-17 01:44:41.255033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:32.896 [2024-11-17 01:44:41.255039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.896 [2024-11-17 01:44:41.255947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 213.461 ms, result 0 00:24:34.282  [2024-11-17T01:44:43.685Z] Copying: 1112/1048576 [kB] (1112 kBps) [2024-11-17T01:44:44.629Z] Copying: 4544/1048576 [kB] (3432 kBps) [2024-11-17T01:44:45.572Z] Copying: 21/1024 [MB] (16 MBps) [2024-11-17T01:44:46.512Z] Copying: 41/1024 [MB] (20 MBps) [2024-11-17T01:44:47.455Z] Copying: 71/1024 [MB] (29 MBps) [2024-11-17T01:44:48.398Z] Copying: 100/1024 [MB] (29 MBps) [2024-11-17T01:44:49.447Z] Copying: 116/1024 [MB] (15 MBps) [2024-11-17T01:44:50.833Z] Copying: 132/1024 [MB] (16 MBps) [2024-11-17T01:44:51.405Z] Copying: 175/1024 [MB] (42 MBps) [2024-11-17T01:44:52.791Z] Copying: 196/1024 [MB] (21 MBps) [2024-11-17T01:44:53.733Z] Copying: 220/1024 [MB] (24 MBps) [2024-11-17T01:44:54.672Z] Copying: 245/1024 [MB] (25 MBps) [2024-11-17T01:44:55.613Z] Copying: 275/1024 [MB] (29 MBps) [2024-11-17T01:44:56.554Z] Copying: 305/1024 [MB] (30 MBps) [2024-11-17T01:44:57.494Z] Copying: 335/1024 [MB] (30 MBps) [2024-11-17T01:44:58.436Z] Copying: 362/1024 [MB] (26 MBps) [2024-11-17T01:44:59.832Z] Copying: 379/1024 [MB] (17 MBps) [2024-11-17T01:45:00.401Z] Copying: 398/1024 [MB] (19 MBps) [2024-11-17T01:45:01.786Z] Copying: 420/1024 [MB] (21 MBps) [2024-11-17T01:45:02.728Z] Copying: 446/1024 [MB] (26 MBps) [2024-11-17T01:45:03.669Z] Copying: 475/1024 [MB] (29 MBps) [2024-11-17T01:45:04.610Z] Copying: 504/1024 [MB] (28 MBps) [2024-11-17T01:45:05.554Z] Copying: 537/1024 [MB] (33 MBps) [2024-11-17T01:45:06.499Z] Copying: 554/1024 [MB] (16 MBps) [2024-11-17T01:45:07.443Z] Copying: 570/1024 [MB] (15 MBps) [2024-11-17T01:45:08.830Z] Copying: 594/1024 [MB] (24 MBps) [2024-11-17T01:45:09.401Z] Copying: 610/1024 [MB] (16 MBps) [2024-11-17T01:45:10.786Z] Copying: 633/1024 [MB] (22 MBps) [2024-11-17T01:45:11.729Z] Copying: 656/1024 [MB] (23 MBps) [2024-11-17T01:45:12.672Z] Copying: 680/1024 [MB] (23 MBps) [2024-11-17T01:45:13.615Z] Copying: 711/1024 [MB] (30 MBps) [2024-11-17T01:45:14.555Z] Copying: 741/1024 [MB] (30 MBps) [2024-11-17T01:45:15.497Z] Copying: 766/1024 [MB] (24 MBps) [2024-11-17T01:45:16.440Z] Copying: 792/1024 [MB] (26 MBps) [2024-11-17T01:45:17.826Z] Copying: 812/1024 [MB] (20 MBps) [2024-11-17T01:45:18.398Z] Copying: 843/1024 [MB] (30 MBps) [2024-11-17T01:45:19.783Z] Copying: 872/1024 [MB] (28 MBps) [2024-11-17T01:45:20.725Z] Copying: 904/1024 [MB] (31 MBps) [2024-11-17T01:45:21.668Z] Copying: 941/1024 [MB] (37 MBps) [2024-11-17T01:45:22.614Z] Copying: 972/1024 [MB] (30 MBps) [2024-11-17T01:45:23.559Z] Copying: 994/1024 [MB] (22 MBps) [2024-11-17T01:45:24.562Z] Copying: 1010/1024 [MB] (16 MBps) [2024-11-17T01:45:24.828Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-17 01:45:24.593930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.369 [2024-11-17 01:45:24.594015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:16.369 [2024-11-17 01:45:24.594039] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:16.369 [2024-11-17 01:45:24.594049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.369 [2024-11-17 01:45:24.594075] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:16.369 [2024-11-17 01:45:24.597349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.369 [2024-11-17 01:45:24.597682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:16.369 [2024-11-17 01:45:24.597708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:25:16.369 [2024-11-17 01:45:24.597718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.369 [2024-11-17 01:45:24.598002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.369 [2024-11-17 01:45:24.598016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:16.369 [2024-11-17 01:45:24.598030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:25:16.369 [2024-11-17 01:45:24.598039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.369 [2024-11-17 01:45:24.612902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.369 [2024-11-17 01:45:24.612961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:16.369 [2024-11-17 01:45:24.612975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.842 ms 00:25:16.369 [2024-11-17 01:45:24.612984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.369 [2024-11-17 01:45:24.619207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.369 [2024-11-17 01:45:24.619250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:16.369 [2024-11-17 01:45:24.619261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.181 ms 00:25:16.369 [2024-11-17 01:45:24.619277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.369 [2024-11-17 01:45:24.646524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.369 [2024-11-17 01:45:24.646574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:16.369 [2024-11-17 01:45:24.646587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.184 ms 00:25:16.369 [2024-11-17 01:45:24.646595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.369 [2024-11-17 01:45:24.662518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.369 [2024-11-17 01:45:24.662567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:16.369 [2024-11-17 01:45:24.662579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.875 ms 00:25:16.369 [2024-11-17 01:45:24.662588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.369 [2024-11-17 01:45:24.667165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.369 [2024-11-17 01:45:24.667213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:16.369 [2024-11-17 01:45:24.667224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.524 ms 00:25:16.369 [2024-11-17 01:45:24.667232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.369 [2024-11-17 01:45:24.693251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:25:16.369 [2024-11-17 01:45:24.693298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:16.369 [2024-11-17 01:45:24.693310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.996 ms 00:25:16.369 [2024-11-17 01:45:24.693318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.369 [2024-11-17 01:45:24.718649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.369 [2024-11-17 01:45:24.718693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:16.369 [2024-11-17 01:45:24.718718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.283 ms 00:25:16.369 [2024-11-17 01:45:24.718726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.369 [2024-11-17 01:45:24.743724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.369 [2024-11-17 01:45:24.743773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:16.369 [2024-11-17 01:45:24.743786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.951 ms 00:25:16.370 [2024-11-17 01:45:24.743810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.370 [2024-11-17 01:45:24.768851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.370 [2024-11-17 01:45:24.768901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:16.370 [2024-11-17 01:45:24.768913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.964 ms 00:25:16.370 [2024-11-17 01:45:24.768922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.370 [2024-11-17 01:45:24.768969] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:16.370 [2024-11-17 01:45:24.768986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:16.370 [2024-11-17 01:45:24.768997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:16.370 [2024-11-17 01:45:24.769006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 
00:25:16.370 [2024-11-17 01:45:24.769089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 
wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:16.370 [2024-11-17 01:45:24.769592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769686] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:16.371 [2024-11-17 01:45:24.769847] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:16.371 [2024-11-17 01:45:24.769856] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8762e556-2333-475f-8181-d698776a93fd 00:25:16.371 [2024-11-17 01:45:24.769865] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:16.371 [2024-11-17 01:45:24.769874] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 164544 00:25:16.371 [2024-11-17 01:45:24.769883] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 162560 00:25:16.371 [2024-11-17 01:45:24.769901] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0122 00:25:16.371 [2024-11-17 01:45:24.769910] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:16.371 [2024-11-17 01:45:24.769922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:16.371 [2024-11-17 01:45:24.769930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:16.371 [2024-11-17 01:45:24.769947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:16.371 [2024-11-17 01:45:24.769955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:16.371 [2024-11-17 01:45:24.769962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.371 [2024-11-17 01:45:24.769972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:16.371 [2024-11-17 01:45:24.769981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:25:16.371 
[2024-11-17 01:45:24.769990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.371 [2024-11-17 01:45:24.783537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.371 [2024-11-17 01:45:24.783736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:16.371 [2024-11-17 01:45:24.783754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.527 ms 00:25:16.371 [2024-11-17 01:45:24.783763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.371 [2024-11-17 01:45:24.784216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.371 [2024-11-17 01:45:24.784231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:16.371 [2024-11-17 01:45:24.784240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:25:16.371 [2024-11-17 01:45:24.784249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.371 [2024-11-17 01:45:24.820752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.371 [2024-11-17 01:45:24.820820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.371 [2024-11-17 01:45:24.820832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.371 [2024-11-17 01:45:24.820841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.371 [2024-11-17 01:45:24.820900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.371 [2024-11-17 01:45:24.820909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.371 [2024-11-17 01:45:24.820918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.371 [2024-11-17 01:45:24.820927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.371 [2024-11-17 01:45:24.821016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.371 [2024-11-17 01:45:24.821031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.371 [2024-11-17 01:45:24.821039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.371 [2024-11-17 01:45:24.821047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.371 [2024-11-17 01:45:24.821063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.371 [2024-11-17 01:45:24.821072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:16.371 [2024-11-17 01:45:24.821080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.371 [2024-11-17 01:45:24.821088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.633 [2024-11-17 01:45:24.907222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.633 [2024-11-17 01:45:24.907451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.633 [2024-11-17 01:45:24.907513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.633 [2024-11-17 01:45:24.907537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.633 [2024-11-17 01:45:24.978366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.633 [2024-11-17 01:45:24.978540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:16.633 [2024-11-17 01:45:24.978601] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.633 [2024-11-17 01:45:24.978625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.633 [2024-11-17 01:45:24.978701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.633 [2024-11-17 01:45:24.978725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.633 [2024-11-17 01:45:24.978756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.633 [2024-11-17 01:45:24.978775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.633 [2024-11-17 01:45:24.978864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.633 [2024-11-17 01:45:24.978890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.633 [2024-11-17 01:45:24.978913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.633 [2024-11-17 01:45:24.978984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.633 [2024-11-17 01:45:24.979118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.633 [2024-11-17 01:45:24.979156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.633 [2024-11-17 01:45:24.979179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.633 [2024-11-17 01:45:24.979210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.633 [2024-11-17 01:45:24.979261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.633 [2024-11-17 01:45:24.979285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:16.633 [2024-11-17 01:45:24.979306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.633 [2024-11-17 01:45:24.979326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.633 [2024-11-17 01:45:24.979396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.633 [2024-11-17 01:45:24.979419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.633 [2024-11-17 01:45:24.979489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.633 [2024-11-17 01:45:24.979518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.633 [2024-11-17 01:45:24.979580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.633 [2024-11-17 01:45:24.979621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.633 [2024-11-17 01:45:24.979643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.633 [2024-11-17 01:45:24.979662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.633 [2024-11-17 01:45:24.979834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.848 ms, result 0 00:25:17.576 00:25:17.576 00:25:17.576 01:45:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:19.491 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:19.491 01:45:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:19.491 [2024-11-17 01:45:27.793622] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:19.491 [2024-11-17 01:45:27.793715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79621 ] 00:25:19.491 [2024-11-17 01:45:27.947491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.752 [2024-11-17 01:45:28.053427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.012 [2024-11-17 01:45:28.342939] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:20.012 [2024-11-17 01:45:28.343021] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:20.275 [2024-11-17 01:45:28.505615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.275 [2024-11-17 01:45:28.505678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:20.275 [2024-11-17 01:45:28.505699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:20.275 [2024-11-17 01:45:28.505708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.275 [2024-11-17 01:45:28.505764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.275 [2024-11-17 01:45:28.505776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:20.275 [2024-11-17 01:45:28.505805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:20.275 [2024-11-17 01:45:28.505814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.275 [2024-11-17 01:45:28.505837] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:20.275 [2024-11-17 01:45:28.506550] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:20.275 [2024-11-17 01:45:28.506569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.275 [2024-11-17 01:45:28.506578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:20.275 [2024-11-17 01:45:28.506588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:25:20.275 [2024-11-17 01:45:28.506596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.275 [2024-11-17 01:45:28.508523] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:20.275 [2024-11-17 01:45:28.523273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.275 [2024-11-17 01:45:28.523328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:20.275 [2024-11-17 01:45:28.523353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.752 ms 00:25:20.275 [2024-11-17 01:45:28.523362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.275 [2024-11-17 01:45:28.523445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.275 [2024-11-17 01:45:28.523455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:20.275 [2024-11-17 01:45:28.523465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:20.275 [2024-11-17 
01:45:28.523472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.275 [2024-11-17 01:45:28.532165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.275 [2024-11-17 01:45:28.532211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:20.275 [2024-11-17 01:45:28.532222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.614 ms 00:25:20.275 [2024-11-17 01:45:28.532230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.275 [2024-11-17 01:45:28.532319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.275 [2024-11-17 01:45:28.532328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:20.275 [2024-11-17 01:45:28.532337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:20.275 [2024-11-17 01:45:28.532345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.275 [2024-11-17 01:45:28.532388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.275 [2024-11-17 01:45:28.532399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:20.275 [2024-11-17 01:45:28.532408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:20.275 [2024-11-17 01:45:28.532415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.275 [2024-11-17 01:45:28.532439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:20.275 [2024-11-17 01:45:28.536528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.275 [2024-11-17 01:45:28.536571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:20.275 [2024-11-17 01:45:28.536582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.094 ms 00:25:20.275 [2024-11-17 01:45:28.536593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.275 [2024-11-17 01:45:28.536629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.275 [2024-11-17 01:45:28.536638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:20.275 [2024-11-17 01:45:28.536647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:20.275 [2024-11-17 01:45:28.536654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.275 [2024-11-17 01:45:28.536707] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:20.275 [2024-11-17 01:45:28.536730] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:20.275 [2024-11-17 01:45:28.536767] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:20.275 [2024-11-17 01:45:28.536805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:20.275 [2024-11-17 01:45:28.536912] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:20.275 [2024-11-17 01:45:28.536923] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:20.275 [2024-11-17 01:45:28.536935] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:20.275 
[2024-11-17 01:45:28.536948] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:20.275 [2024-11-17 01:45:28.536963] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:20.276 [2024-11-17 01:45:28.536972] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:20.276 [2024-11-17 01:45:28.536980] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:20.276 [2024-11-17 01:45:28.536988] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:20.276 [2024-11-17 01:45:28.536996] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:20.276 [2024-11-17 01:45:28.537007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.276 [2024-11-17 01:45:28.537014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:20.276 [2024-11-17 01:45:28.537022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:25:20.276 [2024-11-17 01:45:28.537030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.276 [2024-11-17 01:45:28.537112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.276 [2024-11-17 01:45:28.537121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:20.276 [2024-11-17 01:45:28.537129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:20.276 [2024-11-17 01:45:28.537136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.276 [2024-11-17 01:45:28.537240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:20.276 [2024-11-17 01:45:28.537253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:20.276 [2024-11-17 01:45:28.537262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:20.276 [2024-11-17 01:45:28.537270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:20.276 [2024-11-17 01:45:28.537285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:20.276 [2024-11-17 01:45:28.537300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:20.276 [2024-11-17 01:45:28.537307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:20.276 [2024-11-17 01:45:28.537322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:20.276 [2024-11-17 01:45:28.537332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:20.276 [2024-11-17 01:45:28.537338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:20.276 [2024-11-17 01:45:28.537345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:20.276 [2024-11-17 01:45:28.537352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:20.276 [2024-11-17 01:45:28.537366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:25:20.276 [2024-11-17 01:45:28.537380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:20.276 [2024-11-17 01:45:28.537387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:20.276 [2024-11-17 01:45:28.537401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:20.276 [2024-11-17 01:45:28.537415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:20.276 [2024-11-17 01:45:28.537423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:20.276 [2024-11-17 01:45:28.537438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:20.276 [2024-11-17 01:45:28.537445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:20.276 [2024-11-17 01:45:28.537458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:20.276 [2024-11-17 01:45:28.537465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:20.276 [2024-11-17 01:45:28.537478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:20.276 [2024-11-17 01:45:28.537485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:20.276 [2024-11-17 01:45:28.537498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:20.276 [2024-11-17 01:45:28.537505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:20.276 [2024-11-17 01:45:28.537512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:20.276 [2024-11-17 01:45:28.537518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:20.276 [2024-11-17 01:45:28.537525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:20.276 [2024-11-17 01:45:28.537532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:20.276 [2024-11-17 01:45:28.537545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:20.276 [2024-11-17 01:45:28.537552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537559] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:20.276 [2024-11-17 01:45:28.537568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:20.276 [2024-11-17 01:45:28.537576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:20.276 [2024-11-17 01:45:28.537583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:20.276 [2024-11-17 01:45:28.537592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:20.276 [2024-11-17 01:45:28.537599] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:20.276 [2024-11-17 01:45:28.537606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:20.276 [2024-11-17 01:45:28.537613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:20.276 [2024-11-17 01:45:28.537620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:20.276 [2024-11-17 01:45:28.537628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:20.276 [2024-11-17 01:45:28.537637] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:20.276 [2024-11-17 01:45:28.537648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:20.276 [2024-11-17 01:45:28.537657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:20.276 [2024-11-17 01:45:28.537665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:20.276 [2024-11-17 01:45:28.537673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:20.276 [2024-11-17 01:45:28.537681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:20.276 [2024-11-17 01:45:28.537688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:20.276 [2024-11-17 01:45:28.537697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:20.276 [2024-11-17 01:45:28.537704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:20.276 [2024-11-17 01:45:28.537712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:20.276 [2024-11-17 01:45:28.537718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:20.276 [2024-11-17 01:45:28.537725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:20.276 [2024-11-17 01:45:28.537732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:20.276 [2024-11-17 01:45:28.537740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:20.276 [2024-11-17 01:45:28.537747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:20.276 [2024-11-17 01:45:28.537755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:20.276 [2024-11-17 01:45:28.537763] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:20.276 [2024-11-17 01:45:28.537774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
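The layout figures above are internally consistent, and it is worth seeing why: the L2P region is the entry count times the address size, the P2L regions are the checkpoint page count times the block size, and the hex blk_sz values in the superblock dump translate into the same MiB figures. A standalone sanity-check sketch in bash (not part of the test; the 4096-byte FTL block size is an assumption inferred from the 2048-page P2L region coming out at exactly 8.00 MiB):

    # L2P region: 20971520 entries x 4 bytes per entry ("L2P address size: 4")
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # -> 80, matching "Region l2p ... blocks: 80.00 MiB"
    # P2L checkpoint region: 2048 pages x assumed 4096-byte blocks
    echo $(( 2048 * 4096 / 1024 / 1024 ))     # -> 8, matching "Region p2l0 ... blocks: 8.00 MiB"
    # superblock cross-check: region type 0x2 (the L2P) is dumped with blk_sz:0x5000
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # -> 80 MiB again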
blk_offs:0x0 blk_sz:0x20 00:25:20.276 [2024-11-17 01:45:28.537782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:20.276 [2024-11-17 01:45:28.537803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:20.276 [2024-11-17 01:45:28.537811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:20.276 [2024-11-17 01:45:28.537819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:20.276 [2024-11-17 01:45:28.537827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.276 [2024-11-17 01:45:28.537836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:20.276 [2024-11-17 01:45:28.537846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:25:20.276 [2024-11-17 01:45:28.537854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.276 [2024-11-17 01:45:28.570330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.276 [2024-11-17 01:45:28.570542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:20.276 [2024-11-17 01:45:28.570563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.429 ms 00:25:20.276 [2024-11-17 01:45:28.570572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.276 [2024-11-17 01:45:28.570676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.276 [2024-11-17 01:45:28.570686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:20.277 [2024-11-17 01:45:28.570695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:20.277 [2024-11-17 01:45:28.570702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.277 [2024-11-17 01:45:28.614524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.277 [2024-11-17 01:45:28.614582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:20.277 [2024-11-17 01:45:28.614596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.758 ms 00:25:20.277 [2024-11-17 01:45:28.614605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.277 [2024-11-17 01:45:28.614655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.277 [2024-11-17 01:45:28.614665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:20.277 [2024-11-17 01:45:28.614674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:20.277 [2024-11-17 01:45:28.614687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.277 [2024-11-17 01:45:28.615376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.277 [2024-11-17 01:45:28.615409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:20.277 [2024-11-17 01:45:28.615421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:25:20.277 [2024-11-17 01:45:28.615429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.277 [2024-11-17 01:45:28.615592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:20.277 [2024-11-17 01:45:28.615603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:20.277 [2024-11-17 01:45:28.615612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:25:20.277 [2024-11-17 01:45:28.615626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.277 [2024-11-17 01:45:28.631372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.277 [2024-11-17 01:45:28.631414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:20.277 [2024-11-17 01:45:28.631429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.724 ms 00:25:20.277 [2024-11-17 01:45:28.631438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.277 [2024-11-17 01:45:28.645661] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:20.277 [2024-11-17 01:45:28.645711] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:20.277 [2024-11-17 01:45:28.645725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.277 [2024-11-17 01:45:28.645733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:20.277 [2024-11-17 01:45:28.645743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.180 ms 00:25:20.277 [2024-11-17 01:45:28.645751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.277 [2024-11-17 01:45:28.671567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.277 [2024-11-17 01:45:28.671626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:20.277 [2024-11-17 01:45:28.671639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.743 ms 00:25:20.277 [2024-11-17 01:45:28.671647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.277 [2024-11-17 01:45:28.684507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.277 [2024-11-17 01:45:28.684555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:20.277 [2024-11-17 01:45:28.684568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.804 ms 00:25:20.277 [2024-11-17 01:45:28.684575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.277 [2024-11-17 01:45:28.697178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.277 [2024-11-17 01:45:28.697225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:20.277 [2024-11-17 01:45:28.697237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.554 ms 00:25:20.277 [2024-11-17 01:45:28.697244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.277 [2024-11-17 01:45:28.697920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.277 [2024-11-17 01:45:28.697947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:20.277 [2024-11-17 01:45:28.697958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:25:20.277 [2024-11-17 01:45:28.697969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.538 [2024-11-17 01:45:28.763003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.538 [2024-11-17 01:45:28.763072] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:20.538 [2024-11-17 01:45:28.763095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.014 ms 00:25:20.538 [2024-11-17 01:45:28.763105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.538 [2024-11-17 01:45:28.774427] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:20.538 [2024-11-17 01:45:28.777643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.538 [2024-11-17 01:45:28.777694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:20.538 [2024-11-17 01:45:28.777706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.476 ms 00:25:20.538 [2024-11-17 01:45:28.777715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.539 [2024-11-17 01:45:28.777825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.539 [2024-11-17 01:45:28.777838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:20.539 [2024-11-17 01:45:28.777848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:20.539 [2024-11-17 01:45:28.777860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.539 [2024-11-17 01:45:28.778684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.539 [2024-11-17 01:45:28.778736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:20.539 [2024-11-17 01:45:28.778747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:25:20.539 [2024-11-17 01:45:28.778756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.539 [2024-11-17 01:45:28.778806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.539 [2024-11-17 01:45:28.778816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:20.539 [2024-11-17 01:45:28.778826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:20.539 [2024-11-17 01:45:28.778835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.539 [2024-11-17 01:45:28.778875] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:20.539 [2024-11-17 01:45:28.778889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.539 [2024-11-17 01:45:28.778898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:20.539 [2024-11-17 01:45:28.778907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:20.539 [2024-11-17 01:45:28.778915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.539 [2024-11-17 01:45:28.804807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.539 [2024-11-17 01:45:28.804858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:20.539 [2024-11-17 01:45:28.804872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.872 ms 00:25:20.539 [2024-11-17 01:45:28.804887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.539 [2024-11-17 01:45:28.804977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.539 [2024-11-17 01:45:28.804988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:20.539 [2024-11-17 01:45:28.804999] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:20.539 [2024-11-17 01:45:28.805007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.539 [2024-11-17 01:45:28.806266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.129 ms, result 0 00:25:21.924  [2024-11-17T01:45:31.327Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-17T01:45:32.268Z] Copying: 24/1024 [MB] (10 MBps) [2024-11-17T01:45:33.211Z] Copying: 35/1024 [MB] (10 MBps) [2024-11-17T01:45:34.154Z] Copying: 46/1024 [MB] (10 MBps) [2024-11-17T01:45:35.096Z] Copying: 62/1024 [MB] (15 MBps) [2024-11-17T01:45:36.040Z] Copying: 87/1024 [MB] (24 MBps) [2024-11-17T01:45:37.426Z] Copying: 104/1024 [MB] (17 MBps) [2024-11-17T01:45:37.998Z] Copying: 121/1024 [MB] (17 MBps) [2024-11-17T01:45:39.391Z] Copying: 141/1024 [MB] (19 MBps) [2024-11-17T01:45:40.335Z] Copying: 176/1024 [MB] (35 MBps) [2024-11-17T01:45:41.280Z] Copying: 192/1024 [MB] (16 MBps) [2024-11-17T01:45:42.224Z] Copying: 206/1024 [MB] (13 MBps) [2024-11-17T01:45:43.169Z] Copying: 218/1024 [MB] (12 MBps) [2024-11-17T01:45:44.112Z] Copying: 238/1024 [MB] (19 MBps) [2024-11-17T01:45:45.056Z] Copying: 257/1024 [MB] (19 MBps) [2024-11-17T01:45:45.999Z] Copying: 283/1024 [MB] (26 MBps) [2024-11-17T01:45:47.387Z] Copying: 322/1024 [MB] (39 MBps) [2024-11-17T01:45:48.331Z] Copying: 339/1024 [MB] (16 MBps) [2024-11-17T01:45:49.273Z] Copying: 358/1024 [MB] (18 MBps) [2024-11-17T01:45:50.214Z] Copying: 392/1024 [MB] (33 MBps) [2024-11-17T01:45:51.160Z] Copying: 417/1024 [MB] (25 MBps) [2024-11-17T01:45:52.114Z] Copying: 439/1024 [MB] (22 MBps) [2024-11-17T01:45:53.057Z] Copying: 460/1024 [MB] (20 MBps) [2024-11-17T01:45:54.002Z] Copying: 482/1024 [MB] (22 MBps) [2024-11-17T01:45:55.390Z] Copying: 495/1024 [MB] (12 MBps) [2024-11-17T01:45:56.333Z] Copying: 509/1024 [MB] (14 MBps) [2024-11-17T01:45:57.276Z] Copying: 527/1024 [MB] (17 MBps) [2024-11-17T01:45:58.220Z] Copying: 542/1024 [MB] (14 MBps) [2024-11-17T01:45:59.256Z] Copying: 552/1024 [MB] (10 MBps) [2024-11-17T01:46:00.200Z] Copying: 563/1024 [MB] (11 MBps) [2024-11-17T01:46:01.144Z] Copying: 575/1024 [MB] (11 MBps) [2024-11-17T01:46:02.088Z] Copying: 585/1024 [MB] (10 MBps) [2024-11-17T01:46:03.032Z] Copying: 596/1024 [MB] (10 MBps) [2024-11-17T01:46:04.421Z] Copying: 611/1024 [MB] (14 MBps) [2024-11-17T01:46:04.994Z] Copying: 625/1024 [MB] (14 MBps) [2024-11-17T01:46:06.382Z] Copying: 641/1024 [MB] (16 MBps) [2024-11-17T01:46:07.325Z] Copying: 652/1024 [MB] (10 MBps) [2024-11-17T01:46:08.269Z] Copying: 670/1024 [MB] (18 MBps) [2024-11-17T01:46:09.215Z] Copying: 682/1024 [MB] (11 MBps) [2024-11-17T01:46:10.157Z] Copying: 694/1024 [MB] (12 MBps) [2024-11-17T01:46:11.103Z] Copying: 705/1024 [MB] (11 MBps) [2024-11-17T01:46:12.049Z] Copying: 716/1024 [MB] (10 MBps) [2024-11-17T01:46:12.993Z] Copying: 727/1024 [MB] (11 MBps) [2024-11-17T01:46:14.381Z] Copying: 738/1024 [MB] (11 MBps) [2024-11-17T01:46:15.325Z] Copying: 754/1024 [MB] (15 MBps) [2024-11-17T01:46:16.268Z] Copying: 768/1024 [MB] (14 MBps) [2024-11-17T01:46:17.213Z] Copying: 784/1024 [MB] (15 MBps) [2024-11-17T01:46:18.155Z] Copying: 809/1024 [MB] (25 MBps) [2024-11-17T01:46:19.099Z] Copying: 823/1024 [MB] (13 MBps) [2024-11-17T01:46:20.048Z] Copying: 836/1024 [MB] (12 MBps) [2024-11-17T01:46:20.993Z] Copying: 852/1024 [MB] (15 MBps) [2024-11-17T01:46:22.380Z] Copying: 865/1024 [MB] (13 MBps) [2024-11-17T01:46:23.324Z] Copying: 875/1024 [MB] (10 MBps) 
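Each management step above is reported by mngt/ftl_mngt.c as an Action / name / duration / status quadruple, which makes a startup like this one easy to profile after the fact. A hypothetical one-liner for surfacing the slowest steps (the log file name is illustrative):

    grep -o 'duration: [0-9.]* ms' ftl_startup.log | sort -k2 -rn | head -5

Against the trace above, the top of that list is 65.014 ms and 43.758 ms, which the neighbouring 'name:' entries attribute to 'Restore P2L checkpoints' and 'Initialize NV cache'; together they dominate the 300.129 ms startup total. The bracketed figures in the copy progress stream are per-interval rates sampled roughly once per second, while the 'average 16 MBps' printed at the 1024/1024 mark just below is consistent with the wall clock, about 63 seconds from the end of FTL startup (~01:45:28) to completion (~01:46:31):

    echo $(( 24 - 14 ))     # -> 10, the MB delta for the 01:45:31 -> 01:45:32 interval
    echo $(( 1024 / 63 ))   # -> 16, the reported overall average in MBps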
[2024-11-17T01:46:24.270Z] Copying: 903/1024 [MB] (27 MBps) [2024-11-17T01:46:25.214Z] Copying: 915/1024 [MB] (12 MBps) [2024-11-17T01:46:26.160Z] Copying: 928/1024 [MB] (12 MBps) [2024-11-17T01:46:27.106Z] Copying: 942/1024 [MB] (14 MBps) [2024-11-17T01:46:28.051Z] Copying: 955/1024 [MB] (12 MBps) [2024-11-17T01:46:28.993Z] Copying: 967/1024 [MB] (12 MBps) [2024-11-17T01:46:30.030Z] Copying: 984/1024 [MB] (17 MBps) [2024-11-17T01:46:30.973Z] Copying: 1003/1024 [MB] (19 MBps) [2024-11-17T01:46:31.235Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-17 01:46:31.146129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.776 [2024-11-17 01:46:31.146227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:22.776 [2024-11-17 01:46:31.146244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:22.776 [2024-11-17 01:46:31.146253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.776 [2024-11-17 01:46:31.146277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:22.776 [2024-11-17 01:46:31.149446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.776 [2024-11-17 01:46:31.149485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:22.776 [2024-11-17 01:46:31.149507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.152 ms 00:26:22.776 [2024-11-17 01:46:31.149516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.776 [2024-11-17 01:46:31.149762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.776 [2024-11-17 01:46:31.149773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:22.776 [2024-11-17 01:46:31.149783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:26:22.776 [2024-11-17 01:46:31.149804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.776 [2024-11-17 01:46:31.153903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.776 [2024-11-17 01:46:31.153922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:22.776 [2024-11-17 01:46:31.153933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms 00:26:22.776 [2024-11-17 01:46:31.153942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.776 [2024-11-17 01:46:31.161348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.776 [2024-11-17 01:46:31.161511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:22.776 [2024-11-17 01:46:31.161582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.377 ms 00:26:22.777 [2024-11-17 01:46:31.161605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.777 [2024-11-17 01:46:31.190439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.777 [2024-11-17 01:46:31.190624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:22.777 [2024-11-17 01:46:31.190699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.733 ms 00:26:22.777 [2024-11-17 01:46:31.190725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.777 [2024-11-17 01:46:31.207416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.777 [2024-11-17 01:46:31.207588] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:22.777 [2024-11-17 01:46:31.207658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.536 ms 00:26:22.777 [2024-11-17 01:46:31.207681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.777 [2024-11-17 01:46:31.212555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.777 [2024-11-17 01:46:31.212717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:22.777 [2024-11-17 01:46:31.212780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:26:22.777 [2024-11-17 01:46:31.212822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.040 [2024-11-17 01:46:31.238800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.040 [2024-11-17 01:46:31.238969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:23.040 [2024-11-17 01:46:31.239030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.929 ms 00:26:23.040 [2024-11-17 01:46:31.239052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.040 [2024-11-17 01:46:31.264851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.040 [2024-11-17 01:46:31.265048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:23.040 [2024-11-17 01:46:31.265115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.356 ms 00:26:23.040 [2024-11-17 01:46:31.265138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.040 [2024-11-17 01:46:31.290462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.040 [2024-11-17 01:46:31.290622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:23.040 [2024-11-17 01:46:31.290682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.233 ms 00:26:23.040 [2024-11-17 01:46:31.290704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.040 [2024-11-17 01:46:31.315445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.040 [2024-11-17 01:46:31.315603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:23.040 [2024-11-17 01:46:31.315658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.526 ms 00:26:23.040 [2024-11-17 01:46:31.315679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.040 [2024-11-17 01:46:31.315826] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:23.040 [2024-11-17 01:46:31.315893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:23.040 [2024-11-17 01:46:31.315933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:23.040 [2024-11-17 01:46:31.315962 .. 01:46:31.317626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 .. Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:23.041 [2024-11-17 01:46:31.317643] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:23.041 [2024-11-17 01:46:31.317658] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8762e556-2333-475f-8181-d698776a93fd 00:26:23.041 [2024-11-17 01:46:31.317666] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:23.041 [2024-11-17 01:46:31.317673] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:23.041 [2024-11-17 01:46:31.317680] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:23.041 [2024-11-17 01:46:31.317690] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:23.041 [2024-11-17 01:46:31.317698] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:23.041 [2024-11-17 01:46:31.317706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:23.041 [2024-11-17 01:46:31.317722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:23.041 [2024-11-17 01:46:31.317729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:23.041 [2024-11-17 01:46:31.317736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:23.041 [2024-11-17 01:46:31.317745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.041 [2024-11-17 01:46:31.317754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:23.041 [2024-11-17 01:46:31.317764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.923 ms 00:26:23.041 [2024-11-17 01:46:31.317772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.041 [2024-11-17 01:46:31.331686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.041 [2024-11-17 01:46:31.331718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:23.041 [2024-11-17 01:46:31.331729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.868 ms 00:26:23.041 [2024-11-17 01:46:31.331737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.041 [2024-11-17 01:46:31.332187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.041 [2024-11-17 01:46:31.332204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:23.041 [2024-11-17 01:46:31.332221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:26:23.041 [2024-11-17 01:46:31.332229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.041 [2024-11-17 01:46:31.368678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.042 [2024-11-17 01:46:31.368717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:23.042 [2024-11-17 01:46:31.368729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.042 [2024-11-17 01:46:31.368738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.042 [2024-11-17 01:46:31.368825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.042 [2024-11-17 01:46:31.368836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:23.042 [2024-11-17 01:46:31.368851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.042 [2024-11-17 01:46:31.368861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.042 [2024-11-17 01:46:31.368954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.042 [2024-11-17 01:46:31.368965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:23.042 [2024-11-17 01:46:31.368975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.042 [2024-11-17 01:46:31.368985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.042 [2024-11-17 01:46:31.369003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.042 [2024-11-17 01:46:31.369013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:23.042 [2024-11-17 01:46:31.369028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
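The statistics block above cross-checks against the band dump that precedes it: the 262656 valid LBAs are exactly the two non-free bands summed, and with zero user writes the write amplification factor (WAF = total writes / user writes) is a division by zero, which ftl_debug.c renders as 'inf':

    echo $(( 261120 + 1536 ))   # Band 1 (closed) + Band 2 (open) -> 262656, the "total valid LBAs"
    # WAF: 960 total writes / 0 user writes is undefined, hence "WAF: inf"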
00:26:23.042 [2024-11-17 01:46:31.369040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.042 [2024-11-17 01:46:31.455266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.042 [2024-11-17 01:46:31.455478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:23.042 [2024-11-17 01:46:31.455501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.042 [2024-11-17 01:46:31.455511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.303 [2024-11-17 01:46:31.525948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.303 [2024-11-17 01:46:31.526148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:23.303 [2024-11-17 01:46:31.526168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.303 [2024-11-17 01:46:31.526185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.303 [2024-11-17 01:46:31.526254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.303 [2024-11-17 01:46:31.526265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:23.303 [2024-11-17 01:46:31.526274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.303 [2024-11-17 01:46:31.526283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.303 [2024-11-17 01:46:31.526341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.303 [2024-11-17 01:46:31.526351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:23.303 [2024-11-17 01:46:31.526360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.303 [2024-11-17 01:46:31.526368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.303 [2024-11-17 01:46:31.526476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.303 [2024-11-17 01:46:31.526487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:23.303 [2024-11-17 01:46:31.526495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.303 [2024-11-17 01:46:31.526503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.303 [2024-11-17 01:46:31.526535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.303 [2024-11-17 01:46:31.526545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:23.303 [2024-11-17 01:46:31.526553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.303 [2024-11-17 01:46:31.526562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.303 [2024-11-17 01:46:31.526608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.303 [2024-11-17 01:46:31.526619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:23.303 [2024-11-17 01:46:31.526627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.303 [2024-11-17 01:46:31.526636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.303 [2024-11-17 01:46:31.526685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.303 [2024-11-17 01:46:31.526696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:23.303 [2024-11-17 01:46:31.526708] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.303 [2024-11-17 01:46:31.526717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.303 [2024-11-17 01:46:31.526895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.732 ms, result 0 00:26:23.877 00:26:23.877 00:26:24.138 01:46:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:26.057 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:26.057 01:46:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:26.057 01:46:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:26.057 01:46:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:26.057 01:46:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:26.318 01:46:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:26.318 01:46:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:26.318 01:46:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:26.318 Process with pid 77667 is not found 00:26:26.318 01:46:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77667 00:26:26.318 01:46:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 77667 ']' 00:26:26.318 01:46:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 77667 00:26:26.318 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77667) - No such process 00:26:26.318 01:46:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 77667 is not found' 00:26:26.318 01:46:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:26.579 Remove shared memory files 00:26:26.579 01:46:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:26.579 01:46:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:26.579 01:46:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:26.579 01:46:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:26.579 01:46:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:26.579 01:46:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:26.579 01:46:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:26.579 ************************************ 00:26:26.579 END TEST ftl_dirty_shutdown 00:26:26.579 ************************************ 00:26:26.579 00:26:26.579 real 4m10.845s 00:26:26.579 user 4m44.313s 00:26:26.579 sys 0m28.190s 00:26:26.579 01:46:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.579 01:46:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:26.579 01:46:35 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:26.579 01:46:35 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:26.579 01:46:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:26.579 
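The killprocess call traced above probes pid 77667 with kill -0 before signalling it; the target already exited during the FTL shutdown, so the probe fails ('No such process') and only the not-found message is printed. A simplified sketch of that pattern, reconstructed from the xtrace (the real helper in autotest_common.sh is more involved):

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1               # "'[' -z 77667 ']'" in the trace
        if kill -0 "$pid" 2> /dev/null; then    # "kill -0 77667"
            kill "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }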
01:46:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:26.579 ************************************ 00:26:26.579 START TEST ftl_upgrade_shutdown 00:26:26.579 ************************************ 00:26:26.579 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:26.841 * Looking for test storage... 00:26:26.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:26.841 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:26.841 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:26.841 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:26.841 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:26.841 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:26.841 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:26.841 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.842 --rc genhtml_branch_coverage=1 00:26:26.842 --rc genhtml_function_coverage=1 00:26:26.842 --rc genhtml_legend=1 00:26:26.842 --rc geninfo_all_blocks=1 00:26:26.842 --rc geninfo_unexecuted_blocks=1 00:26:26.842 00:26:26.842 ' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.842 --rc genhtml_branch_coverage=1 00:26:26.842 --rc genhtml_function_coverage=1 00:26:26.842 --rc genhtml_legend=1 00:26:26.842 --rc geninfo_all_blocks=1 00:26:26.842 --rc geninfo_unexecuted_blocks=1 00:26:26.842 00:26:26.842 ' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.842 --rc genhtml_branch_coverage=1 00:26:26.842 --rc genhtml_function_coverage=1 00:26:26.842 --rc genhtml_legend=1 00:26:26.842 --rc geninfo_all_blocks=1 00:26:26.842 --rc geninfo_unexecuted_blocks=1 00:26:26.842 00:26:26.842 ' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.842 --rc genhtml_branch_coverage=1 00:26:26.842 --rc genhtml_function_coverage=1 00:26:26.842 --rc genhtml_legend=1 00:26:26.842 --rc geninfo_all_blocks=1 00:26:26.842 --rc geninfo_unexecuted_blocks=1 00:26:26.842 00:26:26.842 ' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- 
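The run of scripts/common.sh lines traced above is the lcov version gate: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them component-wise, treating missing or non-numeric components as 0. Reconstructed from the xtrace (simplified; the real functions live in scripts/common.sh):

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }
    cmp_versions() {
        local ver1 ver2 v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]}")
            ver2[v]=$(decimal "${ver2[v]}")
            (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # equal so far: only ==, >=, <= succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }

Here 1 < 2 decides at the first component, lt returns 0, and the branch-coverage LCOV options get exported.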
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:26.842 01:46:35 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80381 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80381 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80381 ']' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.842 01:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:27.104 [2024-11-17 01:46:35.301463] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
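
The xtrace above shows tcp_target_setup exporting the FTL test parameters and launching spdk_tgt pinned to core 0, then waitforlisten blocking on the RPC socket. A minimal sketch of that launch-and-wait pattern, with the polling loop standing in for waitforlisten (paths as used in this run):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --cpumask='[0]' &
    spdk_tgt_pid=$!
    # Poll the default UNIX-domain RPC socket until the target answers;
    # rpc.py exits non-zero while the reactor is still coming up.
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
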
00:26:27.104 [2024-11-17 01:46:35.301848] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80381 ] 00:26:27.104 [2024-11-17 01:46:35.473266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.365 [2024-11-17 01:46:35.591832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:27.938 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:28.200 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:28.200 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:28.200 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:28.200 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:26:28.200 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:28.200 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:28.200 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:26:28.200 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:28.462 { 00:26:28.462 "name": "basen1", 00:26:28.462 "aliases": [ 00:26:28.462 "e66b79dc-b2a7-4bf4-8acf-6cbda893b922" 00:26:28.462 ], 00:26:28.462 "product_name": "NVMe disk", 00:26:28.462 "block_size": 4096, 00:26:28.462 "num_blocks": 1310720, 00:26:28.462 "uuid": "e66b79dc-b2a7-4bf4-8acf-6cbda893b922", 00:26:28.462 "numa_id": -1, 00:26:28.462 "assigned_rate_limits": { 00:26:28.462 "rw_ios_per_sec": 0, 00:26:28.462 "rw_mbytes_per_sec": 0, 00:26:28.462 "r_mbytes_per_sec": 0, 00:26:28.462 "w_mbytes_per_sec": 0 00:26:28.462 }, 00:26:28.462 "claimed": true, 00:26:28.462 "claim_type": "read_many_write_one", 00:26:28.462 "zoned": false, 00:26:28.462 "supported_io_types": { 00:26:28.462 "read": true, 00:26:28.462 "write": true, 00:26:28.462 "unmap": true, 00:26:28.462 "flush": true, 00:26:28.462 "reset": true, 00:26:28.462 "nvme_admin": true, 00:26:28.462 "nvme_io": true, 00:26:28.462 "nvme_io_md": false, 00:26:28.462 "write_zeroes": true, 00:26:28.462 "zcopy": false, 00:26:28.462 "get_zone_info": false, 00:26:28.462 "zone_management": false, 00:26:28.462 "zone_append": false, 00:26:28.462 "compare": true, 00:26:28.462 "compare_and_write": false, 00:26:28.462 "abort": true, 00:26:28.462 "seek_hole": false, 00:26:28.462 "seek_data": false, 00:26:28.462 "copy": true, 00:26:28.462 "nvme_iov_md": false 00:26:28.462 }, 00:26:28.462 "driver_specific": { 00:26:28.462 "nvme": [ 00:26:28.462 { 00:26:28.462 "pci_address": "0000:00:11.0", 00:26:28.462 "trid": { 00:26:28.462 "trtype": "PCIe", 00:26:28.462 "traddr": "0000:00:11.0" 00:26:28.462 }, 00:26:28.462 "ctrlr_data": { 00:26:28.462 "cntlid": 0, 00:26:28.462 "vendor_id": "0x1b36", 00:26:28.462 "model_number": "QEMU NVMe Ctrl", 00:26:28.462 "serial_number": "12341", 00:26:28.462 "firmware_revision": "8.0.0", 00:26:28.462 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:28.462 "oacs": { 00:26:28.462 "security": 0, 00:26:28.462 "format": 1, 00:26:28.462 "firmware": 0, 00:26:28.462 "ns_manage": 1 00:26:28.462 }, 00:26:28.462 "multi_ctrlr": false, 00:26:28.462 "ana_reporting": false 00:26:28.462 }, 00:26:28.462 "vs": { 00:26:28.462 "nvme_version": "1.4" 00:26:28.462 }, 00:26:28.462 "ns_data": { 00:26:28.462 "id": 1, 00:26:28.462 "can_share": false 00:26:28.462 } 00:26:28.462 } 00:26:28.462 ], 00:26:28.462 "mp_policy": "active_passive" 00:26:28.462 } 00:26:28.462 } 00:26:28.462 ]' 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:28.462 01:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:28.725 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=eec61e46-5195-4162-8784-14556badbf02 00:26:28.725 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:28.725 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eec61e46-5195-4162-8784-14556badbf02 00:26:28.986 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:29.247 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=612ae817-84a6-40bd-9b82-94fd99300c62 00:26:29.247 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 612ae817-84a6-40bd-9b82-94fd99300c62 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3ae45940-cfc4-47c0-8ce2-7bd851fa3c48 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3ae45940-cfc4-47c0-8ce2-7bd851fa3c48 ]] 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3ae45940-cfc4-47c0-8ce2-7bd851fa3c48 5120 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3ae45940-cfc4-47c0-8ce2-7bd851fa3c48 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3ae45940-cfc4-47c0-8ce2-7bd851fa3c48 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3ae45940-cfc4-47c0-8ce2-7bd851fa3c48 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:29.509 01:46:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3ae45940-cfc4-47c0-8ce2-7bd851fa3c48 00:26:29.770 01:46:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:29.770 { 00:26:29.770 "name": "3ae45940-cfc4-47c0-8ce2-7bd851fa3c48", 00:26:29.770 "aliases": [ 00:26:29.770 "lvs/basen1p0" 00:26:29.770 ], 00:26:29.770 "product_name": "Logical Volume", 00:26:29.770 "block_size": 4096, 00:26:29.770 "num_blocks": 5242880, 00:26:29.770 "uuid": "3ae45940-cfc4-47c0-8ce2-7bd851fa3c48", 00:26:29.770 "assigned_rate_limits": { 00:26:29.770 "rw_ios_per_sec": 0, 00:26:29.770 "rw_mbytes_per_sec": 0, 00:26:29.770 "r_mbytes_per_sec": 0, 00:26:29.770 "w_mbytes_per_sec": 0 00:26:29.770 }, 00:26:29.770 "claimed": false, 00:26:29.770 "zoned": false, 00:26:29.770 "supported_io_types": { 00:26:29.770 "read": true, 00:26:29.770 "write": true, 00:26:29.770 "unmap": true, 00:26:29.770 "flush": false, 00:26:29.770 "reset": true, 00:26:29.770 "nvme_admin": false, 00:26:29.770 "nvme_io": false, 00:26:29.770 "nvme_io_md": false, 00:26:29.770 "write_zeroes": 
true, 00:26:29.770 "zcopy": false, 00:26:29.770 "get_zone_info": false, 00:26:29.770 "zone_management": false, 00:26:29.770 "zone_append": false, 00:26:29.770 "compare": false, 00:26:29.770 "compare_and_write": false, 00:26:29.770 "abort": false, 00:26:29.770 "seek_hole": true, 00:26:29.770 "seek_data": true, 00:26:29.770 "copy": false, 00:26:29.770 "nvme_iov_md": false 00:26:29.770 }, 00:26:29.770 "driver_specific": { 00:26:29.770 "lvol": { 00:26:29.770 "lvol_store_uuid": "612ae817-84a6-40bd-9b82-94fd99300c62", 00:26:29.770 "base_bdev": "basen1", 00:26:29.770 "thin_provision": true, 00:26:29.770 "num_allocated_clusters": 0, 00:26:29.770 "snapshot": false, 00:26:29.770 "clone": false, 00:26:29.770 "esnap_clone": false 00:26:29.770 } 00:26:29.770 } 00:26:29.770 } 00:26:29.770 ]' 00:26:29.770 01:46:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:29.770 01:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:29.770 01:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:29.770 01:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:26:29.770 01:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:26:29.770 01:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:26:29.770 01:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:29.770 01:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:29.770 01:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:30.032 01:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:30.032 01:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:30.032 01:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:30.297 01:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:30.297 01:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:30.297 01:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3ae45940-cfc4-47c0-8ce2-7bd851fa3c48 -c cachen1p0 --l2p_dram_limit 2 00:26:30.297 [2024-11-17 01:46:38.689501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.297 [2024-11-17 01:46:38.689537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:30.297 [2024-11-17 01:46:38.689550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:30.297 [2024-11-17 01:46:38.689556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.297 [2024-11-17 01:46:38.689601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.297 [2024-11-17 01:46:38.689608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:30.297 [2024-11-17 01:46:38.689616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:30.297 [2024-11-17 01:46:38.689622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.297 [2024-11-17 01:46:38.689638] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:30.297 [2024-11-17 
01:46:38.690276] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:30.298 [2024-11-17 01:46:38.690299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.298 [2024-11-17 01:46:38.690305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:30.298 [2024-11-17 01:46:38.690313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.662 ms 00:26:30.298 [2024-11-17 01:46:38.690319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.298 [2024-11-17 01:46:38.690347] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 4b5b3afb-4197-4401-b1a2-7bccde7143a0 00:26:30.298 [2024-11-17 01:46:38.691346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.298 [2024-11-17 01:46:38.691364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:30.298 [2024-11-17 01:46:38.691371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:30.298 [2024-11-17 01:46:38.691379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.298 [2024-11-17 01:46:38.696044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.298 [2024-11-17 01:46:38.696070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:30.298 [2024-11-17 01:46:38.696079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.628 ms 00:26:30.298 [2024-11-17 01:46:38.696087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.298 [2024-11-17 01:46:38.696150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.298 [2024-11-17 01:46:38.696159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:30.298 [2024-11-17 01:46:38.696166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:30.298 [2024-11-17 01:46:38.696174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.298 [2024-11-17 01:46:38.696207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.298 [2024-11-17 01:46:38.696216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:30.298 [2024-11-17 01:46:38.696222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:30.298 [2024-11-17 01:46:38.696233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.298 [2024-11-17 01:46:38.696250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:30.298 [2024-11-17 01:46:38.699104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.298 [2024-11-17 01:46:38.699123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:30.298 [2024-11-17 01:46:38.699133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.857 ms 00:26:30.298 [2024-11-17 01:46:38.699139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.299 [2024-11-17 01:46:38.699159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.299 [2024-11-17 01:46:38.699165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:30.299 [2024-11-17 01:46:38.699173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:30.299 [2024-11-17 01:46:38.699178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:30.299 [2024-11-17 01:46:38.699192] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:30.299 [2024-11-17 01:46:38.699295] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:30.299 [2024-11-17 01:46:38.699315] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:30.299 [2024-11-17 01:46:38.699325] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:30.299 [2024-11-17 01:46:38.699334] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:30.299 [2024-11-17 01:46:38.699341] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:30.299 [2024-11-17 01:46:38.699348] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:30.299 [2024-11-17 01:46:38.699354] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:30.299 [2024-11-17 01:46:38.699363] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:30.299 [2024-11-17 01:46:38.699368] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:30.299 [2024-11-17 01:46:38.699375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.299 [2024-11-17 01:46:38.699381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:30.299 [2024-11-17 01:46:38.699388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:26:30.299 [2024-11-17 01:46:38.699394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.299 [2024-11-17 01:46:38.699458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.299 [2024-11-17 01:46:38.699464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:30.299 [2024-11-17 01:46:38.699472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:26:30.299 [2024-11-17 01:46:38.699482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.299 [2024-11-17 01:46:38.699560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:30.299 [2024-11-17 01:46:38.699566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:30.299 [2024-11-17 01:46:38.699574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:30.299 [2024-11-17 01:46:38.699580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.299 [2024-11-17 01:46:38.699587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:30.299 [2024-11-17 01:46:38.699592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:30.299 [2024-11-17 01:46:38.699598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:30.299 [2024-11-17 01:46:38.699604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:30.299 [2024-11-17 01:46:38.699610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:30.300 [2024-11-17 01:46:38.699615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.300 [2024-11-17 01:46:38.699621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:30.300 [2024-11-17 01:46:38.699626] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:30.300 [2024-11-17 01:46:38.699632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.300 [2024-11-17 01:46:38.699637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:30.300 [2024-11-17 01:46:38.699643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:30.300 [2024-11-17 01:46:38.699648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.300 [2024-11-17 01:46:38.699656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:30.300 [2024-11-17 01:46:38.699661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:30.300 [2024-11-17 01:46:38.699668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.300 [2024-11-17 01:46:38.699673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:30.300 [2024-11-17 01:46:38.699679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:30.300 [2024-11-17 01:46:38.699684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:30.300 [2024-11-17 01:46:38.699690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:30.300 [2024-11-17 01:46:38.699695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:30.300 [2024-11-17 01:46:38.699701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:30.300 [2024-11-17 01:46:38.699706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:30.300 [2024-11-17 01:46:38.699712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:30.300 [2024-11-17 01:46:38.699717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:30.300 [2024-11-17 01:46:38.699723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:30.300 [2024-11-17 01:46:38.699728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:30.300 [2024-11-17 01:46:38.699736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:30.300 [2024-11-17 01:46:38.699742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:30.300 [2024-11-17 01:46:38.699749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:30.300 [2024-11-17 01:46:38.699754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.300 [2024-11-17 01:46:38.699760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:30.300 [2024-11-17 01:46:38.699766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:30.301 [2024-11-17 01:46:38.699772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.301 [2024-11-17 01:46:38.699777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:30.301 [2024-11-17 01:46:38.699783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:30.301 [2024-11-17 01:46:38.699978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.301 [2024-11-17 01:46:38.700009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:30.301 [2024-11-17 01:46:38.700024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:30.301 [2024-11-17 01:46:38.700040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.301 [2024-11-17 01:46:38.700054] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:30.301 [2024-11-17 01:46:38.700070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:30.301 [2024-11-17 01:46:38.700085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:30.301 [2024-11-17 01:46:38.700136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.301 [2024-11-17 01:46:38.700154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:30.301 [2024-11-17 01:46:38.700171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:30.301 [2024-11-17 01:46:38.700185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:30.301 [2024-11-17 01:46:38.700201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:30.301 [2024-11-17 01:46:38.700214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:30.301 [2024-11-17 01:46:38.700230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:30.301 [2024-11-17 01:46:38.700247] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:30.301 [2024-11-17 01:46:38.700301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.301 [2024-11-17 01:46:38.700447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:30.301 [2024-11-17 01:46:38.700472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:30.301 [2024-11-17 01:46:38.700493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:30.301 [2024-11-17 01:46:38.700516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:30.301 [2024-11-17 01:46:38.700537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:30.301 [2024-11-17 01:46:38.700586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:30.302 [2024-11-17 01:46:38.700608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:30.302 [2024-11-17 01:46:38.700660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:30.302 [2024-11-17 01:46:38.700683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:30.302 [2024-11-17 01:46:38.700731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:30.302 [2024-11-17 01:46:38.700756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:30.302 [2024-11-17 01:46:38.700779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:30.302 [2024-11-17 01:46:38.700816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:30.302 [2024-11-17 01:46:38.700865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:30.302 [2024-11-17 01:46:38.700889] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:30.302 [2024-11-17 01:46:38.700913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.302 [2024-11-17 01:46:38.700935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:30.302 [2024-11-17 01:46:38.700958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:30.302 [2024-11-17 01:46:38.700998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:30.302 [2024-11-17 01:46:38.701057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:30.302 [2024-11-17 01:46:38.701098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.302 [2024-11-17 01:46:38.701116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:30.302 [2024-11-17 01:46:38.701132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.591 ms 00:26:30.302 [2024-11-17 01:46:38.701148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.302 [2024-11-17 01:46:38.701208] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
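
Everything from bdev_nvme_attach_controller above through this FTL startup trace builds a single bdev stack: a 20 GiB thin-provisioned lvol as the base device and a 5 GiB split of the second NVMe namespace as the write-buffer cache. Condensed to the bare RPC sequence (stale-lvstore cleanup omitted; device addresses and sizes are the ones used in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: expose 0000:00:11.0 as basen1, then carve out a thin
    # 20 GiB lvol (-t) whose UUID feeds bdev_ftl_create.
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
    lvol=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")
    # Write-buffer cache: expose 0000:00:10.0 as cachen1, split off 5 GiB.
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create cachen1 -s 5120 1
    # Bind base + cache into the FTL bdev under test.
    $rpc -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2
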
00:26:30.302 [2024-11-17 01:46:38.701324] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:33.611 [2024-11-17 01:46:41.801133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.801470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:33.612 [2024-11-17 01:46:41.801556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3099.911 ms 00:26:33.612 [2024-11-17 01:46:41.801586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.832913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.833144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:33.612 [2024-11-17 01:46:41.833284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.966 ms 00:26:33.612 [2024-11-17 01:46:41.833316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.833414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.833503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:33.612 [2024-11-17 01:46:41.833529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:33.612 [2024-11-17 01:46:41.833553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.869146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.869340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:33.612 [2024-11-17 01:46:41.869449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.187 ms 00:26:33.612 [2024-11-17 01:46:41.869477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.869524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.869555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:33.612 [2024-11-17 01:46:41.869575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:33.612 [2024-11-17 01:46:41.869644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.870251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.870400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:33.612 [2024-11-17 01:46:41.870458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.513 ms 00:26:33.612 [2024-11-17 01:46:41.870484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.870550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.870575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:33.612 [2024-11-17 01:46:41.870598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:33.612 [2024-11-17 01:46:41.870622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.887812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.887980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:33.612 [2024-11-17 01:46:41.888047] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.158 ms 00:26:33.612 [2024-11-17 01:46:41.888074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.901215] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:33.612 [2024-11-17 01:46:41.902539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.902675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:33.612 [2024-11-17 01:46:41.902731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.365 ms 00:26:33.612 [2024-11-17 01:46:41.902753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.941475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.941667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:33.612 [2024-11-17 01:46:41.941739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.656 ms 00:26:33.612 [2024-11-17 01:46:41.941764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.941967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.942097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:33.612 [2024-11-17 01:46:41.942187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:26:33.612 [2024-11-17 01:46:41.942213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.967923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.968092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:33.612 [2024-11-17 01:46:41.968158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.632 ms 00:26:33.612 [2024-11-17 01:46:41.968183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.992992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.993152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:33.612 [2024-11-17 01:46:41.993216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.749 ms 00:26:33.612 [2024-11-17 01:46:41.993227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.612 [2024-11-17 01:46:41.994168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.612 [2024-11-17 01:46:41.994221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:33.612 [2024-11-17 01:46:41.994237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.607 ms 00:26:33.612 [2024-11-17 01:46:41.994247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.874 [2024-11-17 01:46:42.081713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.874 [2024-11-17 01:46:42.081766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:33.874 [2024-11-17 01:46:42.081786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 87.385 ms 00:26:33.874 [2024-11-17 01:46:42.081813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.874 [2024-11-17 01:46:42.109113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
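
Each Action/name/duration/status quartet in this trace comes from trace_step in mngt/ftl_mngt.c, so the startup cost breakdown can be tabulated straight from a captured log. A quick sketch (the log file name is an assumption):

    # Print "duration - step name" for every completed FTL startup step.
    awk '/trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); print $0 " - " name }' ftl_startup.log
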
00:26:33.874 [2024-11-17 01:46:42.109166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:33.874 [2024-11-17 01:46:42.109190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.202 ms 00:26:33.874 [2024-11-17 01:46:42.109199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.874 [2024-11-17 01:46:42.135030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.874 [2024-11-17 01:46:42.135074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:33.874 [2024-11-17 01:46:42.135088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.776 ms 00:26:33.874 [2024-11-17 01:46:42.135096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.874 [2024-11-17 01:46:42.161421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.874 [2024-11-17 01:46:42.161467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:33.874 [2024-11-17 01:46:42.161482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.270 ms 00:26:33.874 [2024-11-17 01:46:42.161491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.874 [2024-11-17 01:46:42.161548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.874 [2024-11-17 01:46:42.161559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:33.874 [2024-11-17 01:46:42.161575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:33.874 [2024-11-17 01:46:42.161583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.874 [2024-11-17 01:46:42.161675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.874 [2024-11-17 01:46:42.161686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:33.874 [2024-11-17 01:46:42.161701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:33.874 [2024-11-17 01:46:42.161709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.874 [2024-11-17 01:46:42.163053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3472.881 ms, result 0 00:26:33.874 { 00:26:33.874 "name": "ftl", 00:26:33.874 "uuid": "4b5b3afb-4197-4401-b1a2-7bccde7143a0" 00:26:33.874 } 00:26:33.874 01:46:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:34.136 [2024-11-17 01:46:42.333920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.136 01:46:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:34.136 01:46:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:34.397 [2024-11-17 01:46:42.730347] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:34.397 01:46:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:34.658 [2024-11-17 01:46:42.926658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:34.658 01:46:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:34.919 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:34.919 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:34.919 Fill FTL, iteration 1 00:26:34.919 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:34.919 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:34.919 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:34.919 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80498 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80498 /var/tmp/spdk.tgt.sock 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80498 ']' 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:34.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.920 01:46:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:34.920 [2024-11-17 01:46:43.347600] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
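
The nvmf_* calls above publish the freshly created FTL bdev over loopback NVMe/TCP so that a second SPDK process can drive I/O to it. Stripped of trace noise, the export is (the save_config redirect target is an assumption based on spdk_tgt_cnfg from ftl/common.sh@16):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    # Allow any host (-a), at most one controller (-m 1).
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl   # FTL bdev becomes nsid 1
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    # Persist the target config so the shutdown/upgrade phase can restart from it.
    $rpc save_config > /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
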
00:26:34.920 [2024-11-17 01:46:43.347935] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80498 ] 00:26:35.181 [2024-11-17 01:46:43.504030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.181 [2024-11-17 01:46:43.607115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.125 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.125 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:36.125 01:46:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:36.125 ftln1 00:26:36.125 01:46:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:36.125 01:46:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80498 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80498 ']' 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80498 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80498 00:26:36.387 killing process with pid 80498 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80498' 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80498 00:26:36.387 01:46:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80498 00:26:37.774 01:46:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:37.774 01:46:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:37.774 [2024-11-17 01:46:46.200733] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
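
ftl/common.sh@171-173 above wraps the initiator's save_subsystem_config output in a {"subsystems": [...]} envelope; that file is the --json config every spdk_dd run below replays, which is why the throwaway initiator process (pid 80498) can be killed immediately afterwards. A sketch of the assembly:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    # Attach the initiator to the loopback target; the bdev appears as ftln1.
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
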
00:26:37.774 [2024-11-17 01:46:46.200869] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80545 ] 00:26:38.035 [2024-11-17 01:46:46.358548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.035 [2024-11-17 01:46:46.463715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.423  [2024-11-17T01:46:49.270Z] Copying: 180/1024 [MB] (180 MBps) [2024-11-17T01:46:49.840Z] Copying: 355/1024 [MB] (175 MBps) [2024-11-17T01:46:51.215Z] Copying: 549/1024 [MB] (194 MBps) [2024-11-17T01:46:51.782Z] Copying: 794/1024 [MB] (245 MBps) [2024-11-17T01:46:52.717Z] Copying: 1024/1024 [MB] (average 207 MBps) 00:26:44.258 00:26:44.258 01:46:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:44.258 Calculate MD5 checksum, iteration 1 00:26:44.258 01:46:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:44.258 01:46:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:44.258 01:46:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:44.258 01:46:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:44.258 01:46:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:44.258 01:46:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:44.258 01:46:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:44.258 [2024-11-17 01:46:52.449590] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
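
tcp_dd (ftl/common.sh@198-199 in the trace) is the only I/O path in this test: every fill and every checksum read-back goes through it. Reconstructed from the invocations above, it is roughly:

    tcp_dd() {
        # Make sure the initiator-side JSON config exists, then hand all
        # dd-style arguments through to spdk_dd on core 1.
        tcp_initiator_setup
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
    }
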
00:26:44.258 [2024-11-17 01:46:52.449679] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80609 ] 00:26:44.258 [2024-11-17 01:46:52.597774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.258 [2024-11-17 01:46:52.690947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.634  [2024-11-17T01:46:54.661Z] Copying: 635/1024 [MB] (635 MBps) [2024-11-17T01:46:55.228Z] Copying: 1024/1024 [MB] (average 635 MBps) 00:26:46.769 00:26:46.769 01:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:46.769 01:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:49.311 Fill FTL, iteration 2 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b9b7a32182d68949ff62d8e784190a6d 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:49.311 01:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:49.311 [2024-11-17 01:46:57.231133] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
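
upgrade_shutdown.sh@38-48 is a two-pass fill/checksum loop: seek and skip advance by 1024 one-MiB blocks per pass, so each iteration owns its own 1 GiB window of the FTL bdev. The loop shape, condensed (testdir as defined in ftl/common.sh@8):

    sums=()
    seek=0 skip=0
    for ((i = 0; i < 2; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        seek=$((seek + 1024))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
    done
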
00:26:49.311 [2024-11-17 01:46:57.231350] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80662 ] 00:26:49.311 [2024-11-17 01:46:57.380107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.311 [2024-11-17 01:46:57.469520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.688  [2024-11-17T01:47:00.080Z] Copying: 237/1024 [MB] (237 MBps) [2024-11-17T01:47:01.015Z] Copying: 475/1024 [MB] (238 MBps) [2024-11-17T01:47:02.027Z] Copying: 708/1024 [MB] (233 MBps) [2024-11-17T01:47:02.285Z] Copying: 952/1024 [MB] (244 MBps) [2024-11-17T01:47:02.856Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:26:54.397 00:26:54.397 Calculate MD5 checksum, iteration 2 00:26:54.397 01:47:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:54.397 01:47:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:54.397 01:47:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:54.397 01:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:54.397 01:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:54.397 01:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:54.397 01:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:54.397 01:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:54.397 [2024-11-17 01:47:02.767438] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
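
The two checksums recorded in sums[] are this run's reference data: an upgrade/shutdown test of this shape would re-read the same 1 GiB windows after the restart and compare against them. That comparison is not part of this excerpt; an illustrative version, reusing the tcp_dd/md5sum pattern above, might look like:

    for ((i = 0; i < ${#sums[@]}; i++)); do
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
        got=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $got == "${sums[i]}" ]] || { echo "MD5 mismatch in window $i"; exit 1; }
    done
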
00:26:54.397 [2024-11-17 01:47:02.767695] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80726 ] 00:26:54.656 [2024-11-17 01:47:02.923010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.656 [2024-11-17 01:47:03.006616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.030  [2024-11-17T01:47:05.056Z] Copying: 702/1024 [MB] (702 MBps) [2024-11-17T01:47:05.995Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:26:57.536 00:26:57.536 01:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:57.536 01:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:59.448 01:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:59.448 01:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=477af3c364e31fdd4f09f7451224061e 00:26:59.448 01:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:59.448 01:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:59.448 01:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:59.707 [2024-11-17 01:47:07.936774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.707 [2024-11-17 01:47:07.936817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:59.707 [2024-11-17 01:47:07.936828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:59.707 [2024-11-17 01:47:07.936834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.707 [2024-11-17 01:47:07.936853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.707 [2024-11-17 01:47:07.936860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:59.707 [2024-11-17 01:47:07.936866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:59.707 [2024-11-17 01:47:07.936874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.707 [2024-11-17 01:47:07.936889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.707 [2024-11-17 01:47:07.936896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:59.707 [2024-11-17 01:47:07.936902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:59.707 [2024-11-17 01:47:07.936908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.707 [2024-11-17 01:47:07.936955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.171 ms, result 0 00:26:59.707 true 00:26:59.707 01:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:59.707 { 00:26:59.707 "name": "ftl", 00:26:59.707 "properties": [ 00:26:59.707 { 00:26:59.707 "name": "superblock_version", 00:26:59.707 "value": 5, 00:26:59.707 "read-only": true 00:26:59.707 }, 00:26:59.707 { 00:26:59.707 "name": "base_device", 00:26:59.707 "bands": [ 00:26:59.708 { 00:26:59.708 "id": 0, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 
00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 1, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 2, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 3, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 4, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 5, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 6, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 7, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 8, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 9, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 10, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 11, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 12, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 13, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 14, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 15, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 16, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 17, 00:26:59.708 "state": "FREE", 00:26:59.708 "validity": 0.0 00:26:59.708 } 00:26:59.708 ], 00:26:59.708 "read-only": true 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "name": "cache_device", 00:26:59.708 "type": "bdev", 00:26:59.708 "chunks": [ 00:26:59.708 { 00:26:59.708 "id": 0, 00:26:59.708 "state": "INACTIVE", 00:26:59.708 "utilization": 0.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 1, 00:26:59.708 "state": "CLOSED", 00:26:59.708 "utilization": 1.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 2, 00:26:59.708 "state": "CLOSED", 00:26:59.708 "utilization": 1.0 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 3, 00:26:59.708 "state": "OPEN", 00:26:59.708 "utilization": 0.001953125 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "id": 4, 00:26:59.708 "state": "OPEN", 00:26:59.708 "utilization": 0.0 00:26:59.708 } 00:26:59.708 ], 00:26:59.708 "read-only": true 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "name": "verbose_mode", 00:26:59.708 "value": true, 00:26:59.708 "unit": "", 00:26:59.708 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:59.708 }, 00:26:59.708 { 00:26:59.708 "name": "prep_upgrade_on_shutdown", 00:26:59.708 "value": false, 00:26:59.708 "unit": "", 00:26:59.708 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:59.708 } 00:26:59.708 ] 00:26:59.708 } 00:26:59.708 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:59.968 [2024-11-17 01:47:08.337102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:59.968 [2024-11-17 01:47:08.337132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:59.968 [2024-11-17 01:47:08.337140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:59.968 [2024-11-17 01:47:08.337146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.968 [2024-11-17 01:47:08.337162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.968 [2024-11-17 01:47:08.337168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:59.968 [2024-11-17 01:47:08.337174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:59.968 [2024-11-17 01:47:08.337180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.968 [2024-11-17 01:47:08.337194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.968 [2024-11-17 01:47:08.337200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:59.968 [2024-11-17 01:47:08.337206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:59.968 [2024-11-17 01:47:08.337211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.968 [2024-11-17 01:47:08.337254] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.142 ms, result 0 00:26:59.968 true 00:26:59.968 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:59.968 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:59.968 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:00.227 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:00.227 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:00.227 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:00.487 [2024-11-17 01:47:08.693373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.487 [2024-11-17 01:47:08.693401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:00.487 [2024-11-17 01:47:08.693409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:00.487 [2024-11-17 01:47:08.693414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.487 [2024-11-17 01:47:08.693430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.487 [2024-11-17 01:47:08.693436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:00.487 [2024-11-17 01:47:08.693441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:00.487 [2024-11-17 01:47:08.693447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.487 [2024-11-17 01:47:08.693461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.487 [2024-11-17 01:47:08.693467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:00.487 [2024-11-17 01:47:08.693472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:00.487 [2024-11-17 01:47:08.693477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:00.487 [2024-11-17 01:47:08.693518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.136 ms, result 0 00:27:00.487 true 00:27:00.487 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:00.487 { 00:27:00.487 "name": "ftl", 00:27:00.487 "properties": [ 00:27:00.487 { 00:27:00.487 "name": "superblock_version", 00:27:00.487 "value": 5, 00:27:00.487 "read-only": true 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "name": "base_device", 00:27:00.487 "bands": [ 00:27:00.487 { 00:27:00.487 "id": 0, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 1, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 2, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 3, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 4, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 5, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 6, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 7, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 8, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 9, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 10, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 11, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 12, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 13, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 14, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 15, 00:27:00.487 "state": "FREE", 00:27:00.487 "validity": 0.0 00:27:00.487 }, 00:27:00.487 { 00:27:00.487 "id": 16, 00:27:00.488 "state": "FREE", 00:27:00.488 "validity": 0.0 00:27:00.488 }, 00:27:00.488 { 00:27:00.488 "id": 17, 00:27:00.488 "state": "FREE", 00:27:00.488 "validity": 0.0 00:27:00.488 } 00:27:00.488 ], 00:27:00.488 "read-only": true 00:27:00.488 }, 00:27:00.488 { 00:27:00.488 "name": "cache_device", 00:27:00.488 "type": "bdev", 00:27:00.488 "chunks": [ 00:27:00.488 { 00:27:00.488 "id": 0, 00:27:00.488 "state": "INACTIVE", 00:27:00.488 "utilization": 0.0 00:27:00.488 }, 00:27:00.488 { 00:27:00.488 "id": 1, 00:27:00.488 "state": "CLOSED", 00:27:00.488 "utilization": 1.0 00:27:00.488 }, 00:27:00.488 { 00:27:00.488 "id": 2, 00:27:00.488 "state": "CLOSED", 00:27:00.488 "utilization": 1.0 00:27:00.488 }, 00:27:00.488 { 00:27:00.488 "id": 3, 00:27:00.488 "state": "OPEN", 00:27:00.488 "utilization": 0.001953125 00:27:00.488 }, 00:27:00.488 { 00:27:00.488 "id": 4, 00:27:00.488 "state": "OPEN", 00:27:00.488 "utilization": 0.0 00:27:00.488 } 00:27:00.488 ], 00:27:00.488 "read-only": true 00:27:00.488 }, 00:27:00.488 { 00:27:00.488 "name": "verbose_mode", 
00:27:00.488 "value": true, 00:27:00.488 "unit": "", 00:27:00.488 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:00.488 }, 00:27:00.488 { 00:27:00.488 "name": "prep_upgrade_on_shutdown", 00:27:00.488 "value": true, 00:27:00.488 "unit": "", 00:27:00.488 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:00.488 } 00:27:00.488 ] 00:27:00.488 } 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80381 ]] 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80381 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80381 ']' 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80381 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80381 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80381' 00:27:00.488 killing process with pid 80381 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80381 00:27:00.488 01:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80381 00:27:01.061 [2024-11-17 01:47:09.471406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:01.061 [2024-11-17 01:47:09.481065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.061 [2024-11-17 01:47:09.481100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:01.061 [2024-11-17 01:47:09.481109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:01.061 [2024-11-17 01:47:09.481116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.061 [2024-11-17 01:47:09.481133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:01.061 [2024-11-17 01:47:09.483178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.061 [2024-11-17 01:47:09.483199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:01.061 [2024-11-17 01:47:09.483207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.036 ms 00:27:01.061 [2024-11-17 01:47:09.483214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.197 [2024-11-17 01:47:17.080674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.197 [2024-11-17 01:47:17.080734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:09.197 [2024-11-17 01:47:17.080747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7597.417 ms 00:27:09.197 [2024-11-17 01:47:17.080754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.197 [2024-11-17 01:47:17.081823] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:09.197 [2024-11-17 01:47:17.081840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:09.197 [2024-11-17 01:47:17.081849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.052 ms 00:27:09.197 [2024-11-17 01:47:17.081856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.197 [2024-11-17 01:47:17.082710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.197 [2024-11-17 01:47:17.082726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:09.197 [2024-11-17 01:47:17.082734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.836 ms 00:27:09.197 [2024-11-17 01:47:17.082741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.197 [2024-11-17 01:47:17.091515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.197 [2024-11-17 01:47:17.091541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:09.197 [2024-11-17 01:47:17.091549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.733 ms 00:27:09.197 [2024-11-17 01:47:17.091557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.197 [2024-11-17 01:47:17.097587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.197 [2024-11-17 01:47:17.097612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:09.197 [2024-11-17 01:47:17.097621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.004 ms 00:27:09.197 [2024-11-17 01:47:17.097628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.197 [2024-11-17 01:47:17.097691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.197 [2024-11-17 01:47:17.097700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:09.197 [2024-11-17 01:47:17.097707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:09.197 [2024-11-17 01:47:17.097718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.197 [2024-11-17 01:47:17.105633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.197 [2024-11-17 01:47:17.105655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:09.197 [2024-11-17 01:47:17.105662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.903 ms 00:27:09.197 [2024-11-17 01:47:17.105668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.197 [2024-11-17 01:47:17.113729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.197 [2024-11-17 01:47:17.113751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:09.197 [2024-11-17 01:47:17.113758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.036 ms 00:27:09.197 [2024-11-17 01:47:17.113764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.197 [2024-11-17 01:47:17.121502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.197 [2024-11-17 01:47:17.121524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:09.197 [2024-11-17 01:47:17.121530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.713 ms 00:27:09.197 [2024-11-17 01:47:17.121536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.197 [2024-11-17 01:47:17.129107] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.198 [2024-11-17 01:47:17.129129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:09.198 [2024-11-17 01:47:17.129135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.521 ms 00:27:09.198 [2024-11-17 01:47:17.129141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.129165] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:09.198 [2024-11-17 01:47:17.129176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:09.198 [2024-11-17 01:47:17.129185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:09.198 [2024-11-17 01:47:17.129198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:09.198 [2024-11-17 01:47:17.129205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:09.198 [2024-11-17 01:47:17.129296] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:09.198 [2024-11-17 01:47:17.129302] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4b5b3afb-4197-4401-b1a2-7bccde7143a0 00:27:09.198 [2024-11-17 01:47:17.129308] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:09.198 [2024-11-17 01:47:17.129314] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:09.198 [2024-11-17 01:47:17.129320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:09.198 [2024-11-17 01:47:17.129326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:09.198 [2024-11-17 01:47:17.129332] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:09.198 [2024-11-17 01:47:17.129339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:09.198 [2024-11-17 01:47:17.129347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:09.198 [2024-11-17 01:47:17.129352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:09.198 [2024-11-17 01:47:17.129359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:09.198 [2024-11-17 01:47:17.129366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.198 [2024-11-17 01:47:17.129372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:09.198 [2024-11-17 01:47:17.129382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:27:09.198 [2024-11-17 01:47:17.129388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.139615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.198 [2024-11-17 01:47:17.139637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:09.198 [2024-11-17 01:47:17.139645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.205 ms 00:27:09.198 [2024-11-17 01:47:17.139652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.139964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.198 [2024-11-17 01:47:17.139974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:09.198 [2024-11-17 01:47:17.139982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:27:09.198 [2024-11-17 01:47:17.139988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.175110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.175135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:09.198 [2024-11-17 01:47:17.175144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.175154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.175178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.175185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:09.198 [2024-11-17 01:47:17.175192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.175198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.175251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.175260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:09.198 [2024-11-17 01:47:17.175266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.175272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.175287] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.175311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:09.198 [2024-11-17 01:47:17.175317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.175324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.238173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.238206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:09.198 [2024-11-17 01:47:17.238215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.238222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.288967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.288999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:09.198 [2024-11-17 01:47:17.289008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.289016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.289099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.289108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:09.198 [2024-11-17 01:47:17.289114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.289121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.289157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.289169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:09.198 [2024-11-17 01:47:17.289175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.289182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.289259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.289267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:09.198 [2024-11-17 01:47:17.289273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.289279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.289307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.289315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:09.198 [2024-11-17 01:47:17.289323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.289330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.289366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.289372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:09.198 [2024-11-17 01:47:17.289379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.289385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 
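For reference, the WAF figure in the ftl_debug statistics dumped during the shutdown above is consistent with write amplification computed as total writes over user writes:

    WAF = total writes / user writes = 786752 / 524288 = 1.50061... ≈ 1.5006

The ~262k blocks of non-user traffic are roughly one band's worth (261120 blocks); the log does not break that figure down further, so the exact composition is not recoverable from this run.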
[2024-11-17 01:47:17.289426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:09.198 [2024-11-17 01:47:17.289443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:09.198 [2024-11-17 01:47:17.289450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:09.198 [2024-11-17 01:47:17.289457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.198 [2024-11-17 01:47:17.289567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7808.446 ms, result 0 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80906 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80906 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80906 ']' 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.490 01:47:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:14.490 [2024-11-17 01:47:22.258210] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
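At this point the prepared shutdown has completed ('FTL shutdown', result 0) and the test relaunches the target to verify that FTL comes back up from the persisted state. A minimal sketch of the sequence driven here, assuming the repo layout used throughout this run and an FTL bdev named "ftl"; the $spdk_tgt_pid variable is illustrative, and the harness's killprocess/waitforlisten helpers do the equivalent with retries:

    SPDK=/home/vagrant/spdk_repo/spdk
    # arm the upgrade path, then bring the target down cleanly
    "$SPDK/scripts/rpc.py" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"   # runs the 'FTL shutdown' steps logged above
    # relaunch from the saved target config; on startup FTL reloads the superblock
    # and restores L2P, NV cache and band state, as in the 'FTL startup' trace below
    "$SPDK/build/bin/spdk_tgt" --cpumask='[0]' --config="$SPDK/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!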
00:27:14.490 [2024-11-17 01:47:22.258327] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80906 ] 00:27:14.490 [2024-11-17 01:47:22.418126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.490 [2024-11-17 01:47:22.521367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.060 [2024-11-17 01:47:23.281850] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:15.060 [2024-11-17 01:47:23.281942] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:15.060 [2024-11-17 01:47:23.435992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.060 [2024-11-17 01:47:23.436056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:15.060 [2024-11-17 01:47:23.436071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:15.060 [2024-11-17 01:47:23.436080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.060 [2024-11-17 01:47:23.436143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.060 [2024-11-17 01:47:23.436154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:15.060 [2024-11-17 01:47:23.436163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:15.060 [2024-11-17 01:47:23.436171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.060 [2024-11-17 01:47:23.436198] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:15.060 [2024-11-17 01:47:23.436987] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:15.060 [2024-11-17 01:47:23.437022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.060 [2024-11-17 01:47:23.437031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:15.060 [2024-11-17 01:47:23.437041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.833 ms 00:27:15.060 [2024-11-17 01:47:23.437049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.060 [2024-11-17 01:47:23.438840] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:15.060 [2024-11-17 01:47:23.453130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.060 [2024-11-17 01:47:23.453187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:15.060 [2024-11-17 01:47:23.453208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.292 ms 00:27:15.060 [2024-11-17 01:47:23.453217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.060 [2024-11-17 01:47:23.453299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.060 [2024-11-17 01:47:23.453309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:15.060 [2024-11-17 01:47:23.453319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:15.061 [2024-11-17 01:47:23.453328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.061 [2024-11-17 01:47:23.461998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.061 [2024-11-17 
01:47:23.462052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:15.061 [2024-11-17 01:47:23.462063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.577 ms 00:27:15.061 [2024-11-17 01:47:23.462071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.061 [2024-11-17 01:47:23.462143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.061 [2024-11-17 01:47:23.462153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:15.061 [2024-11-17 01:47:23.462162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:15.061 [2024-11-17 01:47:23.462170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.061 [2024-11-17 01:47:23.462220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.061 [2024-11-17 01:47:23.462231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:15.061 [2024-11-17 01:47:23.462244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:15.061 [2024-11-17 01:47:23.462252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.061 [2024-11-17 01:47:23.462278] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:15.061 [2024-11-17 01:47:23.466301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.061 [2024-11-17 01:47:23.466346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:15.061 [2024-11-17 01:47:23.466357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.029 ms 00:27:15.061 [2024-11-17 01:47:23.466368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.061 [2024-11-17 01:47:23.466400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.061 [2024-11-17 01:47:23.466408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:15.061 [2024-11-17 01:47:23.466417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:15.061 [2024-11-17 01:47:23.466425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.061 [2024-11-17 01:47:23.466482] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:15.061 [2024-11-17 01:47:23.466507] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:15.061 [2024-11-17 01:47:23.466550] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:15.061 [2024-11-17 01:47:23.466566] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:15.061 [2024-11-17 01:47:23.466673] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:15.061 [2024-11-17 01:47:23.466684] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:15.061 [2024-11-17 01:47:23.466695] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:15.061 [2024-11-17 01:47:23.466705] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:15.061 [2024-11-17 01:47:23.466714] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:15.061 [2024-11-17 01:47:23.466726] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:15.061 [2024-11-17 01:47:23.466734] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:15.061 [2024-11-17 01:47:23.466744] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:15.061 [2024-11-17 01:47:23.466752] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:15.061 [2024-11-17 01:47:23.466760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.061 [2024-11-17 01:47:23.466768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:15.061 [2024-11-17 01:47:23.466777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:27:15.061 [2024-11-17 01:47:23.466784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.061 [2024-11-17 01:47:23.466887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.061 [2024-11-17 01:47:23.466896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:15.061 [2024-11-17 01:47:23.466904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:15.061 [2024-11-17 01:47:23.466914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.061 [2024-11-17 01:47:23.467020] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:15.061 [2024-11-17 01:47:23.467031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:15.061 [2024-11-17 01:47:23.467040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:15.061 [2024-11-17 01:47:23.467047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:15.061 [2024-11-17 01:47:23.467062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:15.061 [2024-11-17 01:47:23.467077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:15.061 [2024-11-17 01:47:23.467084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:15.061 [2024-11-17 01:47:23.467091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:15.061 [2024-11-17 01:47:23.467105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:15.061 [2024-11-17 01:47:23.467112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:15.061 [2024-11-17 01:47:23.467128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:15.061 [2024-11-17 01:47:23.467135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:15.061 [2024-11-17 01:47:23.467149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:15.061 [2024-11-17 01:47:23.467156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467163] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:15.061 [2024-11-17 01:47:23.467169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:15.061 [2024-11-17 01:47:23.467175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.061 [2024-11-17 01:47:23.467182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:15.061 [2024-11-17 01:47:23.467189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:15.061 [2024-11-17 01:47:23.467195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.061 [2024-11-17 01:47:23.467211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:15.061 [2024-11-17 01:47:23.467217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:15.061 [2024-11-17 01:47:23.467224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.061 [2024-11-17 01:47:23.467231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:15.061 [2024-11-17 01:47:23.467237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:15.061 [2024-11-17 01:47:23.467245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.061 [2024-11-17 01:47:23.467251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:15.061 [2024-11-17 01:47:23.467258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:15.061 [2024-11-17 01:47:23.467264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:15.061 [2024-11-17 01:47:23.467281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:15.061 [2024-11-17 01:47:23.467307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:15.061 [2024-11-17 01:47:23.467321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:15.061 [2024-11-17 01:47:23.467340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:15.061 [2024-11-17 01:47:23.467347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467354] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:15.061 [2024-11-17 01:47:23.467362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:15.061 [2024-11-17 01:47:23.467372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:15.061 [2024-11-17 01:47:23.467380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.061 [2024-11-17 01:47:23.467391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:15.061 [2024-11-17 01:47:23.467399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:15.061 [2024-11-17 01:47:23.467405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:15.061 [2024-11-17 01:47:23.467413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:15.061 [2024-11-17 01:47:23.467419] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:15.061 [2024-11-17 01:47:23.467427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:15.061 [2024-11-17 01:47:23.467435] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:15.061 [2024-11-17 01:47:23.467444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.061 [2024-11-17 01:47:23.467453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:15.061 [2024-11-17 01:47:23.467460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:15.061 [2024-11-17 01:47:23.467469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:15.061 [2024-11-17 01:47:23.467477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:15.061 [2024-11-17 01:47:23.467484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:15.062 [2024-11-17 01:47:23.467491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:15.062 [2024-11-17 01:47:23.467498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:15.062 [2024-11-17 01:47:23.467505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:15.062 [2024-11-17 01:47:23.467512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:15.062 [2024-11-17 01:47:23.467519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:15.062 [2024-11-17 01:47:23.467527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:15.062 [2024-11-17 01:47:23.467534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:15.062 [2024-11-17 01:47:23.467541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:15.062 [2024-11-17 01:47:23.467549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:15.062 [2024-11-17 01:47:23.467556] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:15.062 [2024-11-17 01:47:23.467564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.062 [2024-11-17 01:47:23.467572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:15.062 [2024-11-17 01:47:23.467579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:15.062 [2024-11-17 01:47:23.467587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:15.062 [2024-11-17 01:47:23.467594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:15.062 [2024-11-17 01:47:23.467601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.062 [2024-11-17 01:47:23.467609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:15.062 [2024-11-17 01:47:23.467620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.651 ms 00:27:15.062 [2024-11-17 01:47:23.467628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.062 [2024-11-17 01:47:23.467672] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:15.062 [2024-11-17 01:47:23.467682] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:19.267 [2024-11-17 01:47:27.117595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.267 [2024-11-17 01:47:27.117689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:19.267 [2024-11-17 01:47:27.117709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3649.908 ms 00:27:19.268 [2024-11-17 01:47:27.117719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.149364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.149428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:19.268 [2024-11-17 01:47:27.149443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.371 ms 00:27:19.268 [2024-11-17 01:47:27.149452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.149551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.149569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:19.268 [2024-11-17 01:47:27.149580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:19.268 [2024-11-17 01:47:27.149588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.184642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.184697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:19.268 [2024-11-17 01:47:27.184711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.994 ms 00:27:19.268 [2024-11-17 01:47:27.184724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.184766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.184776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:19.268 [2024-11-17 01:47:27.184785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:19.268 [2024-11-17 01:47:27.184811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.185415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.185454] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:19.268 [2024-11-17 01:47:27.185467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.548 ms 00:27:19.268 [2024-11-17 01:47:27.185476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.185535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.185545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:19.268 [2024-11-17 01:47:27.185554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:19.268 [2024-11-17 01:47:27.185562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.203059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.203106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:19.268 [2024-11-17 01:47:27.203117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.475 ms 00:27:19.268 [2024-11-17 01:47:27.203125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.217133] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:19.268 [2024-11-17 01:47:27.217190] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:19.268 [2024-11-17 01:47:27.217205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.217214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:19.268 [2024-11-17 01:47:27.217224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.964 ms 00:27:19.268 [2024-11-17 01:47:27.217231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.232302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.232347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:19.268 [2024-11-17 01:47:27.232359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.017 ms 00:27:19.268 [2024-11-17 01:47:27.232369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.244905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.244966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:19.268 [2024-11-17 01:47:27.244978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.482 ms 00:27:19.268 [2024-11-17 01:47:27.244985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.257519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.257570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:19.268 [2024-11-17 01:47:27.257582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.485 ms 00:27:19.268 [2024-11-17 01:47:27.257589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.258291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.258333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:19.268 [2024-11-17 
01:47:27.258343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.568 ms 00:27:19.268 [2024-11-17 01:47:27.258352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.332369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.332447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:19.268 [2024-11-17 01:47:27.332465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.992 ms 00:27:19.268 [2024-11-17 01:47:27.332474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.343719] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:19.268 [2024-11-17 01:47:27.344917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.344958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:19.268 [2024-11-17 01:47:27.344971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.371 ms 00:27:19.268 [2024-11-17 01:47:27.344980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.345086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.345101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:19.268 [2024-11-17 01:47:27.345112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:19.268 [2024-11-17 01:47:27.345122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.345183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.345194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:19.268 [2024-11-17 01:47:27.345203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:19.268 [2024-11-17 01:47:27.345212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.345235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.345244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:19.268 [2024-11-17 01:47:27.345252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:19.268 [2024-11-17 01:47:27.345263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.345306] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:19.268 [2024-11-17 01:47:27.345322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.345335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:19.268 [2024-11-17 01:47:27.345348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:19.268 [2024-11-17 01:47:27.345357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.371394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.371456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:19.268 [2024-11-17 01:47:27.371470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.013 ms 00:27:19.268 [2024-11-17 01:47:27.371478] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.371569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.371581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:19.268 [2024-11-17 01:47:27.371590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:19.268 [2024-11-17 01:47:27.371598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.372951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3936.430 ms, result 0 00:27:19.268 [2024-11-17 01:47:27.387839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.268 [2024-11-17 01:47:27.403861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:19.268 [2024-11-17 01:47:27.412085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:19.268 01:47:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:19.268 01:47:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:19.268 01:47:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:19.268 01:47:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:19.268 01:47:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:19.268 [2024-11-17 01:47:27.652230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.652289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:19.268 [2024-11-17 01:47:27.652304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:19.268 [2024-11-17 01:47:27.652316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.652340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.652349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:19.268 [2024-11-17 01:47:27.652358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:19.268 [2024-11-17 01:47:27.652367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.652388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.268 [2024-11-17 01:47:27.652397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:19.268 [2024-11-17 01:47:27.652406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:19.268 [2024-11-17 01:47:27.652414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.268 [2024-11-17 01:47:27.652477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.236 ms, result 0 00:27:19.268 true 00:27:19.268 01:47:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:19.529 { 00:27:19.529 "name": "ftl", 00:27:19.529 "properties": [ 00:27:19.529 { 00:27:19.529 "name": "superblock_version", 00:27:19.529 "value": 5, 00:27:19.529 "read-only": true 00:27:19.529 }, 
00:27:19.529 { 00:27:19.529 "name": "base_device", 00:27:19.530 "bands": [ 00:27:19.530 { 00:27:19.530 "id": 0, 00:27:19.530 "state": "CLOSED", 00:27:19.530 "validity": 1.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 1, 00:27:19.530 "state": "CLOSED", 00:27:19.530 "validity": 1.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 2, 00:27:19.530 "state": "CLOSED", 00:27:19.530 "validity": 0.007843137254901933 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 3, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 4, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 5, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 6, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 7, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 8, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 9, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 10, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 11, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 12, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 13, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 14, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 15, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 16, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 17, 00:27:19.530 "state": "FREE", 00:27:19.530 "validity": 0.0 00:27:19.530 } 00:27:19.530 ], 00:27:19.530 "read-only": true 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "name": "cache_device", 00:27:19.530 "type": "bdev", 00:27:19.530 "chunks": [ 00:27:19.530 { 00:27:19.530 "id": 0, 00:27:19.530 "state": "INACTIVE", 00:27:19.530 "utilization": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 1, 00:27:19.530 "state": "OPEN", 00:27:19.530 "utilization": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 2, 00:27:19.530 "state": "OPEN", 00:27:19.530 "utilization": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 3, 00:27:19.530 "state": "FREE", 00:27:19.530 "utilization": 0.0 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "id": 4, 00:27:19.530 "state": "FREE", 00:27:19.530 "utilization": 0.0 00:27:19.530 } 00:27:19.530 ], 00:27:19.530 "read-only": true 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "name": "verbose_mode", 00:27:19.530 "value": true, 00:27:19.530 "unit": "", 00:27:19.530 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:19.530 }, 00:27:19.530 { 00:27:19.530 "name": "prep_upgrade_on_shutdown", 00:27:19.530 "value": false, 00:27:19.530 "unit": "", 00:27:19.530 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:19.530 } 00:27:19.530 ] 00:27:19.530 } 00:27:19.530 01:47:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:19.530 01:47:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:19.530 01:47:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:19.790 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:19.791 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:19.791 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:19.791 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:19.791 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:20.052 Validate MD5 checksum, iteration 1 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:20.052 01:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:20.052 [2024-11-17 01:47:28.389379] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:27:20.052 [2024-11-17 01:47:28.389525] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80987 ] 00:27:20.314 [2024-11-17 01:47:28.550264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.314 [2024-11-17 01:47:28.676696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.242  [2024-11-17T01:47:31.273Z] Copying: 546/1024 [MB] (546 MBps) [2024-11-17T01:47:32.216Z] Copying: 1024/1024 [MB] (average 552 MBps) 00:27:23.757 00:27:23.757 01:47:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:23.757 01:47:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:26.304 Validate MD5 checksum, iteration 2 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b9b7a32182d68949ff62d8e784190a6d 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b9b7a32182d68949ff62d8e784190a6d != \b\9\b\7\a\3\2\1\8\2\d\6\8\9\4\9\f\f\6\2\d\8\e\7\8\4\1\9\0\a\6\d ]] 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:26.304 01:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:26.304 [2024-11-17 01:47:34.433091] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
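The entries above trace one full pass of the test's checksum-validation loop: read 1024 MiB from the exported ftln1 bdev over NVMe/TCP, hash the output file, and compare against the digest expected for that region, advancing --skip by 1024 blocks per iteration (0, 1024, 2048 in this run). A minimal sketch of the loop, assembled only from commands visible in this trace; $testfile and $expected_md5 are illustrative placeholders rather than the script's actual variables:

    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # spdk_dd reads 1024 x 1 MiB blocks from ftln1 over NVMe/TCP at the given offset
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 '-d ')
        # the computed digest must equal the one recorded for this 1024 MiB region
        [[ $sum == "${expected_md5[i]}" ]] || return 1    # hypothetical failure handling
    done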
00:27:26.304 [2024-11-17 01:47:34.433216] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81055 ] 00:27:26.304 [2024-11-17 01:47:34.594600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.304 [2024-11-17 01:47:34.690395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.216  [2024-11-17T01:47:36.675Z] Copying: 710/1024 [MB] (710 MBps) [2024-11-17T01:47:37.618Z] Copying: 1024/1024 [MB] (average 703 MBps) 00:27:29.159 00:27:29.159 01:47:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:29.159 01:47:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:31.070 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=477af3c364e31fdd4f09f7451224061e 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 477af3c364e31fdd4f09f7451224061e != \4\7\7\a\f\3\c\3\6\4\e\3\1\f\d\d\4\f\0\9\f\7\4\5\1\2\2\4\0\6\1\e ]] 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80906 ]] 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80906 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81116 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81116 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81116 ']' 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
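Both pre-shutdown digests match, so the test moves to the step this suite exists for: tcp_target_shutdown_dirty sends SIGKILL to the running target (pid 80906) instead of shutting it down cleanly, then brings up a fresh target (pid 81116) from the saved tgt.json, forcing FTL to start from a dirty superblock and take the recovery path traced below (band state, P2L checkpoints, open-chunk recovery). A hedged sketch of that sequence, following the commands in the trace — the backgrounding and $! bookkeeping are assumptions, as the log only shows the kill, the relaunch, and waitforlisten:

    kill -9 $spdk_tgt_pid    # SIGKILL: the target gets no chance to persist FTL state
    unset spdk_tgt_pid
    # relaunch from the JSON config captured earlier; startup detects the dirty
    # state and runs recovery instead of a normal load
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid

After recovery completes, the same checksum loop runs again; the test passes only if the post-recovery digests reproduce the pre-shutdown values.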
00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.071 01:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:31.339 [2024-11-17 01:47:39.580155] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:31.339 [2024-11-17 01:47:39.580245] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81116 ] 00:27:31.339 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 80906 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:31.339 [2024-11-17 01:47:39.729988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.602 [2024-11-17 01:47:39.807504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.176 [2024-11-17 01:47:40.376055] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:32.176 [2024-11-17 01:47:40.376104] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:32.176 [2024-11-17 01:47:40.518999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.519036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:32.176 [2024-11-17 01:47:40.519046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:32.176 [2024-11-17 01:47:40.519053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.519091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.519099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:32.176 [2024-11-17 01:47:40.519105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:32.176 [2024-11-17 01:47:40.519111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.519129] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:32.176 [2024-11-17 01:47:40.519705] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:32.176 [2024-11-17 01:47:40.519722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.519728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:32.176 [2024-11-17 01:47:40.519735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.600 ms 00:27:32.176 [2024-11-17 01:47:40.519740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.520031] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:32.176 [2024-11-17 01:47:40.532297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.532323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:32.176 [2024-11-17 01:47:40.532332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.266 ms 00:27:32.176 [2024-11-17 01:47:40.532339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.539190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:27:32.176 [2024-11-17 01:47:40.539215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:32.176 [2024-11-17 01:47:40.539224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:32.176 [2024-11-17 01:47:40.539230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.539473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.539487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:32.176 [2024-11-17 01:47:40.539494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:27:32.176 [2024-11-17 01:47:40.539499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.539535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.539544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:32.176 [2024-11-17 01:47:40.539550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:32.176 [2024-11-17 01:47:40.539556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.539575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.539581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:32.176 [2024-11-17 01:47:40.539587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:32.176 [2024-11-17 01:47:40.539593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.539608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:32.176 [2024-11-17 01:47:40.541806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.541828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:32.176 [2024-11-17 01:47:40.541835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.202 ms 00:27:32.176 [2024-11-17 01:47:40.541841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.541862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.541868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:32.176 [2024-11-17 01:47:40.541875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:32.176 [2024-11-17 01:47:40.541880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.541897] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:32.176 [2024-11-17 01:47:40.541911] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:32.176 [2024-11-17 01:47:40.541937] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:32.176 [2024-11-17 01:47:40.541951] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:32.176 [2024-11-17 01:47:40.542029] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:32.176 [2024-11-17 01:47:40.542042] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:32.176 [2024-11-17 01:47:40.542050] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:32.176 [2024-11-17 01:47:40.542057] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:32.176 [2024-11-17 01:47:40.542063] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:32.176 [2024-11-17 01:47:40.542069] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:32.176 [2024-11-17 01:47:40.542075] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:32.176 [2024-11-17 01:47:40.542080] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:32.176 [2024-11-17 01:47:40.542085] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:32.176 [2024-11-17 01:47:40.542091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.542099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:32.176 [2024-11-17 01:47:40.542105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:27:32.176 [2024-11-17 01:47:40.542110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.542175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.176 [2024-11-17 01:47:40.542181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:32.176 [2024-11-17 01:47:40.542186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:32.176 [2024-11-17 01:47:40.542192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.176 [2024-11-17 01:47:40.542266] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:32.176 [2024-11-17 01:47:40.542274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:32.176 [2024-11-17 01:47:40.542282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:32.176 [2024-11-17 01:47:40.542288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.176 [2024-11-17 01:47:40.542294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:32.176 [2024-11-17 01:47:40.542299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:32.176 [2024-11-17 01:47:40.542305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:32.176 [2024-11-17 01:47:40.542310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:32.176 [2024-11-17 01:47:40.542315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:32.176 [2024-11-17 01:47:40.542320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.176 [2024-11-17 01:47:40.542326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:32.176 [2024-11-17 01:47:40.542331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:32.176 [2024-11-17 01:47:40.542336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.176 [2024-11-17 01:47:40.542341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:32.176 [2024-11-17 01:47:40.542347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:27:32.176 [2024-11-17 01:47:40.542351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.176 [2024-11-17 01:47:40.542356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:32.176 [2024-11-17 01:47:40.542361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:32.177 [2024-11-17 01:47:40.542366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.177 [2024-11-17 01:47:40.542372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:32.177 [2024-11-17 01:47:40.542377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:32.177 [2024-11-17 01:47:40.542382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:32.177 [2024-11-17 01:47:40.542387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:32.177 [2024-11-17 01:47:40.542396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:32.177 [2024-11-17 01:47:40.542401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:32.177 [2024-11-17 01:47:40.542406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:32.177 [2024-11-17 01:47:40.542411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:32.177 [2024-11-17 01:47:40.542416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:32.177 [2024-11-17 01:47:40.542420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:32.177 [2024-11-17 01:47:40.542425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:32.177 [2024-11-17 01:47:40.542430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:32.177 [2024-11-17 01:47:40.542435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:32.177 [2024-11-17 01:47:40.542440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:32.177 [2024-11-17 01:47:40.542445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.177 [2024-11-17 01:47:40.542450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:32.177 [2024-11-17 01:47:40.542455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:32.177 [2024-11-17 01:47:40.542460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.177 [2024-11-17 01:47:40.542465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:32.177 [2024-11-17 01:47:40.542470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:32.177 [2024-11-17 01:47:40.542475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.177 [2024-11-17 01:47:40.542480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:32.177 [2024-11-17 01:47:40.542485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:32.177 [2024-11-17 01:47:40.542490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.177 [2024-11-17 01:47:40.542495] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:32.177 [2024-11-17 01:47:40.542501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:32.177 [2024-11-17 01:47:40.542507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:32.177 [2024-11-17 01:47:40.542512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:27:32.177 [2024-11-17 01:47:40.542518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:32.177 [2024-11-17 01:47:40.542524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:32.177 [2024-11-17 01:47:40.542529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:32.177 [2024-11-17 01:47:40.542534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:32.177 [2024-11-17 01:47:40.542539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:32.177 [2024-11-17 01:47:40.542544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:32.177 [2024-11-17 01:47:40.542550] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:32.177 [2024-11-17 01:47:40.542557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:32.177 [2024-11-17 01:47:40.542569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:32.177 [2024-11-17 01:47:40.542585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:32.177 [2024-11-17 01:47:40.542590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:32.177 [2024-11-17 01:47:40.542596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:32.177 [2024-11-17 01:47:40.542601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:32.177 [2024-11-17 01:47:40.542638] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:27:32.177 [2024-11-17 01:47:40.542644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:32.177 [2024-11-17 01:47:40.542656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:32.177 [2024-11-17 01:47:40.542661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:32.177 [2024-11-17 01:47:40.542667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:32.177 [2024-11-17 01:47:40.542673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.542680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:32.177 [2024-11-17 01:47:40.542686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.458 ms 00:27:32.177 [2024-11-17 01:47:40.542691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.177 [2024-11-17 01:47:40.561531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.561555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:32.177 [2024-11-17 01:47:40.561563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.803 ms 00:27:32.177 [2024-11-17 01:47:40.561569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.177 [2024-11-17 01:47:40.561595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.561602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:32.177 [2024-11-17 01:47:40.561608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:32.177 [2024-11-17 01:47:40.561613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.177 [2024-11-17 01:47:40.585533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.585558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:32.177 [2024-11-17 01:47:40.585565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.880 ms 00:27:32.177 [2024-11-17 01:47:40.585571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.177 [2024-11-17 01:47:40.585593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.585599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:32.177 [2024-11-17 01:47:40.585606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:32.177 [2024-11-17 01:47:40.585611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.177 [2024-11-17 01:47:40.585683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.585691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:32.177 [2024-11-17 01:47:40.585697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:32.177 [2024-11-17 01:47:40.585703] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:32.177 [2024-11-17 01:47:40.585732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.585737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:32.177 [2024-11-17 01:47:40.585743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:32.177 [2024-11-17 01:47:40.585749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.177 [2024-11-17 01:47:40.597123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.597146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:32.177 [2024-11-17 01:47:40.597154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.359 ms 00:27:32.177 [2024-11-17 01:47:40.597160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.177 [2024-11-17 01:47:40.597233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.597242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:32.177 [2024-11-17 01:47:40.597248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:32.177 [2024-11-17 01:47:40.597253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.177 [2024-11-17 01:47:40.619964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.620006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:32.177 [2024-11-17 01:47:40.620020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.695 ms 00:27:32.177 [2024-11-17 01:47:40.620029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.177 [2024-11-17 01:47:40.630023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.177 [2024-11-17 01:47:40.630044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:32.177 [2024-11-17 01:47:40.630058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.396 ms 00:27:32.177 [2024-11-17 01:47:40.630064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.439 [2024-11-17 01:47:40.674087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.439 [2024-11-17 01:47:40.674125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:32.439 [2024-11-17 01:47:40.674140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.980 ms 00:27:32.439 [2024-11-17 01:47:40.674146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.439 [2024-11-17 01:47:40.674256] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:32.439 [2024-11-17 01:47:40.674332] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:32.439 [2024-11-17 01:47:40.674404] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:32.439 [2024-11-17 01:47:40.674472] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:32.439 [2024-11-17 01:47:40.674496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.439 [2024-11-17 01:47:40.674502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:32.439 [2024-11-17 
01:47:40.674509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.310 ms 00:27:32.439 [2024-11-17 01:47:40.674514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.439 [2024-11-17 01:47:40.674558] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:32.439 [2024-11-17 01:47:40.674567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.439 [2024-11-17 01:47:40.674575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:32.439 [2024-11-17 01:47:40.674582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:32.439 [2024-11-17 01:47:40.674588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.439 [2024-11-17 01:47:40.686075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.439 [2024-11-17 01:47:40.686103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:32.439 [2024-11-17 01:47:40.686111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.470 ms 00:27:32.439 [2024-11-17 01:47:40.686117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.439 [2024-11-17 01:47:40.692639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.439 [2024-11-17 01:47:40.692662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:32.439 [2024-11-17 01:47:40.692670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:32.439 [2024-11-17 01:47:40.692676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.439 [2024-11-17 01:47:40.692738] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:32.439 [2024-11-17 01:47:40.692861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.439 [2024-11-17 01:47:40.692872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:32.439 [2024-11-17 01:47:40.692880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.125 ms 00:27:32.439 [2024-11-17 01:47:40.692886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.022 [2024-11-17 01:47:41.201532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.022 [2024-11-17 01:47:41.201597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:33.022 [2024-11-17 01:47:41.201612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 507.985 ms 00:27:33.022 [2024-11-17 01:47:41.201622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.022 [2024-11-17 01:47:41.205932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.022 [2024-11-17 01:47:41.205965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:33.022 [2024-11-17 01:47:41.205975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.419 ms 00:27:33.022 [2024-11-17 01:47:41.205983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.022 [2024-11-17 01:47:41.206802] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:33.022 [2024-11-17 01:47:41.206829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.022 [2024-11-17 01:47:41.206837] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:33.022 [2024-11-17 01:47:41.206847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.813 ms 00:27:33.022 [2024-11-17 01:47:41.206854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.022 [2024-11-17 01:47:41.206884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.022 [2024-11-17 01:47:41.206894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:33.022 [2024-11-17 01:47:41.206902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:33.022 [2024-11-17 01:47:41.206910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.022 [2024-11-17 01:47:41.206947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 514.205 ms, result 0 00:27:33.022 [2024-11-17 01:47:41.206983] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:33.022 [2024-11-17 01:47:41.207049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.022 [2024-11-17 01:47:41.207059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:33.022 [2024-11-17 01:47:41.207067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:27:33.022 [2024-11-17 01:47:41.207074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.069832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.069915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:33.660 [2024-11-17 01:47:42.069933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 861.784 ms 00:27:33.660 [2024-11-17 01:47:42.069942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.074966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.075013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:33.660 [2024-11-17 01:47:42.075025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.786 ms 00:27:33.660 [2024-11-17 01:47:42.075033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.076231] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:33.660 [2024-11-17 01:47:42.076276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.076285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:33.660 [2024-11-17 01:47:42.076294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.207 ms 00:27:33.660 [2024-11-17 01:47:42.076302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.076342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.076352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:33.660 [2024-11-17 01:47:42.076361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:33.660 [2024-11-17 01:47:42.076369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 
01:47:42.076410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 869.415 ms, result 0 00:27:33.660 [2024-11-17 01:47:42.076457] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:33.660 [2024-11-17 01:47:42.076468] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:33.660 [2024-11-17 01:47:42.076479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.076489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:33.660 [2024-11-17 01:47:42.076499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1383.752 ms 00:27:33.660 [2024-11-17 01:47:42.076507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.076539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.076548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:33.660 [2024-11-17 01:47:42.076561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:33.660 [2024-11-17 01:47:42.076569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.089033] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:33.660 [2024-11-17 01:47:42.089179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.089191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:33.660 [2024-11-17 01:47:42.089202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.593 ms 00:27:33.660 [2024-11-17 01:47:42.089211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.089954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.089977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:33.660 [2024-11-17 01:47:42.089991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.663 ms 00:27:33.660 [2024-11-17 01:47:42.089999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.092233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.092255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:33.660 [2024-11-17 01:47:42.092266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.215 ms 00:27:33.660 [2024-11-17 01:47:42.092276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.092325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.092335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:33.660 [2024-11-17 01:47:42.092344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:33.660 [2024-11-17 01:47:42.092354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.092465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.092475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:33.660 
[2024-11-17 01:47:42.092483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:33.660 [2024-11-17 01:47:42.092491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.092513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.092521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:33.660 [2024-11-17 01:47:42.092529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:33.660 [2024-11-17 01:47:42.092537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.092570] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:33.660 [2024-11-17 01:47:42.092591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.092599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:33.660 [2024-11-17 01:47:42.092608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:33.660 [2024-11-17 01:47:42.092616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.092671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.660 [2024-11-17 01:47:42.092680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:33.660 [2024-11-17 01:47:42.092688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:33.660 [2024-11-17 01:47:42.092697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.660 [2024-11-17 01:47:42.093972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1574.404 ms, result 0 00:27:33.660 [2024-11-17 01:47:42.109597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.920 [2024-11-17 01:47:42.125589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:33.920 [2024-11-17 01:47:42.134537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:33.920 Validate MD5 checksum, iteration 1 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:33.920 01:47:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:33.920 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:33.921 01:47:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:33.921 [2024-11-17 01:47:42.245129] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:33.921 [2024-11-17 01:47:42.245265] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81151 ] 00:27:34.180 [2024-11-17 01:47:42.400845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.180 [2024-11-17 01:47:42.522394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.092  [2024-11-17T01:47:44.811Z] Copying: 621/1024 [MB] (621 MBps) [2024-11-17T01:47:45.753Z] Copying: 1024/1024 [MB] (average 595 MBps) 00:27:37.294 00:27:37.294 01:47:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:37.294 01:47:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b9b7a32182d68949ff62d8e784190a6d 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b9b7a32182d68949ff62d8e784190a6d != \b\9\b\7\a\3\2\1\8\2\d\6\8\9\4\9\f\f\6\2\d\8\e\7\8\4\1\9\0\a\6\d ]] 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:39.841 Validate MD5 checksum, iteration 2 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:39.841 01:47:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:39.841 [2024-11-17 01:47:47.991330] Starting SPDK v25.01-pre git sha1 
83e8405e4 / DPDK 24.03.0 initialization... 00:27:39.841 [2024-11-17 01:47:47.991445] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81212 ] 00:27:39.841 [2024-11-17 01:47:48.152289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.841 [2024-11-17 01:47:48.244937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.757  [2024-11-17T01:47:50.216Z] Copying: 744/1024 [MB] (744 MBps) [2024-11-17T01:47:54.424Z] Copying: 1024/1024 [MB] (average 724 MBps) 00:27:45.965 00:27:45.965 01:47:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:45.965 01:47:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:47.881 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:47.881 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=477af3c364e31fdd4f09f7451224061e 00:27:47.881 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 477af3c364e31fdd4f09f7451224061e != \4\7\7\a\f\3\c\3\6\4\e\3\1\f\d\d\4\f\0\9\f\7\4\5\1\2\2\4\0\6\1\e ]] 00:27:47.881 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:47.881 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:47.881 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:47.881 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:47.881 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:47.881 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81116 ]] 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81116 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81116 ']' 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81116 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81116 00:27:48.142 killing process with pid 81116 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81116' 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 81116 00:27:48.142 01:47:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81116 00:27:48.716 [2024-11-17 01:47:56.946497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:48.716 [2024-11-17 01:47:56.957094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.957131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:48.716 [2024-11-17 01:47:56.957142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:48.716 [2024-11-17 01:47:56.957148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:56.957166] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:48.716 [2024-11-17 01:47:56.959243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.959270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:48.716 [2024-11-17 01:47:56.959284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.066 ms 00:27:48.716 [2024-11-17 01:47:56.959295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:56.959490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.959499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:48.716 [2024-11-17 01:47:56.959505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.176 ms 00:27:48.716 [2024-11-17 01:47:56.959511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:56.960494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.960521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:48.716 [2024-11-17 01:47:56.960529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.971 ms 00:27:48.716 [2024-11-17 01:47:56.960535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:56.961417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.961436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:48.716 [2024-11-17 01:47:56.961443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:27:48.716 [2024-11-17 01:47:56.961449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:56.968846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.968874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:48.716 [2024-11-17 01:47:56.968882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.373 ms 00:27:48.716 [2024-11-17 01:47:56.968888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:56.973151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.973179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:48.716 [2024-11-17 01:47:56.973187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.232 ms 00:27:48.716 [2024-11-17 01:47:56.973194] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:56.973251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.973259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:48.716 [2024-11-17 01:47:56.973266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:48.716 [2024-11-17 01:47:56.973272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:56.980238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.980263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:48.716 [2024-11-17 01:47:56.980270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.950 ms 00:27:48.716 [2024-11-17 01:47:56.980276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:56.987648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.987674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:48.716 [2024-11-17 01:47:56.987681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.347 ms 00:27:48.716 [2024-11-17 01:47:56.987687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:56.994913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:56.994947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:48.716 [2024-11-17 01:47:56.994954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.202 ms 00:27:48.716 [2024-11-17 01:47:56.994959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:57.002075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.716 [2024-11-17 01:47:57.002101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:48.716 [2024-11-17 01:47:57.002108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.070 ms 00:27:48.716 [2024-11-17 01:47:57.002114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.716 [2024-11-17 01:47:57.002137] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:48.716 [2024-11-17 01:47:57.002149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:48.717 [2024-11-17 01:47:57.002156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:48.717 [2024-11-17 01:47:57.002162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:48.717 [2024-11-17 01:47:57.002168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 
[2024-11-17 01:47:57.002196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:48.717 [2024-11-17 01:47:57.002255] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:48.717 [2024-11-17 01:47:57.002261] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4b5b3afb-4197-4401-b1a2-7bccde7143a0 00:27:48.717 [2024-11-17 01:47:57.002267] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:48.717 [2024-11-17 01:47:57.002273] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:48.717 [2024-11-17 01:47:57.002278] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:48.717 [2024-11-17 01:47:57.002283] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:48.717 [2024-11-17 01:47:57.002289] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:48.717 [2024-11-17 01:47:57.002295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:48.717 [2024-11-17 01:47:57.002300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:48.717 [2024-11-17 01:47:57.002305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:48.717 [2024-11-17 01:47:57.002310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:48.717 [2024-11-17 01:47:57.002315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.717 [2024-11-17 01:47:57.002321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:48.717 [2024-11-17 01:47:57.002332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:27:48.717 [2024-11-17 01:47:57.002338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.012074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.717 [2024-11-17 01:47:57.012098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:48.717 [2024-11-17 01:47:57.012106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.715 ms 00:27:48.717 [2024-11-17 01:47:57.012112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
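A note on the test logic recorded above: the MD5 validation pass that preceded this shutdown (the tcp_dd/md5sum xtrace near the top of this excerpt) reduces to the loop sketched below. This is a reconstruction from the xtrace, not the literal upgrade_shutdown.sh source; "iterations", "testdir" and "expected[]" are illustrative stand-ins, since the trace only shows the reference sums coming from a file.md5 that the cleanup step later removes.

    test_validate_checksum() {   # reconstructed from the xtrace; names are illustrative
        local i sum skip=0
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Read 1024 x 1 MiB blocks out of the ftln1 bdev over NVMe/TCP via spdk_dd
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            # Fail on the first iteration whose checksum no longer matches
            [[ $sum == "${expected[i]}" ]] || return 1
        done
    }

The management-process entries around this point are the matching teardown: once the data has been verified, the target is killed and the 'FTL shutdown' process persists L2P, band, trim and NV-cache state before the process exits.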
00:27:48.717 [2024-11-17 01:47:57.012384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.717 [2024-11-17 01:47:57.012396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:48.717 [2024-11-17 01:47:57.012403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:27:48.717 [2024-11-17 01:47:57.012409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.045159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.045189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:48.717 [2024-11-17 01:47:57.045196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.045202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.045227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.045233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:48.717 [2024-11-17 01:47:57.045240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.045246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.045296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.045303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:48.717 [2024-11-17 01:47:57.045310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.045316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.045330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.045338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:48.717 [2024-11-17 01:47:57.045344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.045350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.105045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.105077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:48.717 [2024-11-17 01:47:57.105085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.105090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.153775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.153815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:48.717 [2024-11-17 01:47:57.153823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.153829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.153892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.153901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:48.717 [2024-11-17 01:47:57.153907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.153912] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.153946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.153953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:48.717 [2024-11-17 01:47:57.153959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.153972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.154041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.154052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:48.717 [2024-11-17 01:47:57.154058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.154063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.154090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.154103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:48.717 [2024-11-17 01:47:57.154109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.154117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.154144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.154154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:48.717 [2024-11-17 01:47:57.154159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.154165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.154197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:48.717 [2024-11-17 01:47:57.154207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:48.717 [2024-11-17 01:47:57.154213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:48.717 [2024-11-17 01:47:57.154221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.717 [2024-11-17 01:47:57.154312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 197.197 ms, result 0 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:49.661 Remove shared memory files 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:49.661 01:47:57 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80906 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:49.661 00:27:49.661 real 1m22.780s 00:27:49.661 user 1m53.380s 00:27:49.661 sys 0m19.669s 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:49.661 ************************************ 00:27:49.661 01:47:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:49.661 END TEST ftl_upgrade_shutdown 00:27:49.661 ************************************ 00:27:49.661 01:47:57 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:27:49.661 01:47:57 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:49.661 01:47:57 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:27:49.661 01:47:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:49.661 01:47:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:49.661 ************************************ 00:27:49.661 START TEST ftl_restore_fast 00:27:49.661 ************************************ 00:27:49.661 01:47:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:49.661 * Looking for test storage... 00:27:49.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:49.661 01:47:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:49.661 01:47:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:49.661 01:47:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:27:49.661 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:49.661 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.661 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.662 --rc genhtml_branch_coverage=1 00:27:49.662 --rc genhtml_function_coverage=1 00:27:49.662 --rc genhtml_legend=1 00:27:49.662 --rc geninfo_all_blocks=1 00:27:49.662 --rc geninfo_unexecuted_blocks=1 00:27:49.662 00:27:49.662 ' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.662 --rc genhtml_branch_coverage=1 00:27:49.662 --rc genhtml_function_coverage=1 00:27:49.662 --rc genhtml_legend=1 00:27:49.662 --rc geninfo_all_blocks=1 00:27:49.662 --rc geninfo_unexecuted_blocks=1 00:27:49.662 00:27:49.662 ' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.662 --rc genhtml_branch_coverage=1 00:27:49.662 --rc genhtml_function_coverage=1 00:27:49.662 --rc genhtml_legend=1 00:27:49.662 --rc geninfo_all_blocks=1 00:27:49.662 --rc geninfo_unexecuted_blocks=1 00:27:49.662 00:27:49.662 ' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.662 --rc genhtml_branch_coverage=1 00:27:49.662 --rc genhtml_function_coverage=1 00:27:49.662 --rc genhtml_legend=1 00:27:49.662 --rc geninfo_all_blocks=1 00:27:49.662 --rc geninfo_unexecuted_blocks=1 00:27:49.662 00:27:49.662 ' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
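A note on the version probe above: the "lt 1.15 2" / cmp_versions xtrace is lcov feature detection from scripts/common.sh. Below is a simplified sketch of what that trace exercises, reconstructed from the trace and trimmed down (the in-tree helper also covers the <=, >= and == operators through its lt/gt/eq locals, which this sketch does not), so treat it as illustrative rather than the exact source.

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        # Compare component-wise, padding the shorter version with zeros
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                [[ $op == '<' ]]; return
            elif ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                [[ $op == '>' ]]; return
            fi
        done
        [[ $op == *'='* ]]   # all components equal
    }

In the run above, lt 1.15 2 succeeds on the first component (1 < 2), which is why the lcov coverage options are exported just before restore.sh starts sourcing ftl/common.sh.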
00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.nnOQZ6ZkSE 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:49.662 01:47:58 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=81392 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 81392 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 81392 ']' 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.662 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:49.923 [2024-11-17 01:47:58.124783] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:49.923 [2024-11-17 01:47:58.124918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81392 ] 00:27:49.923 [2024-11-17 01:47:58.281656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.923 [2024-11-17 01:47:58.367295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.868 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.868 01:47:58 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:27:50.868 01:47:58 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:50.868 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:27:50.868 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:50.868 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:27:50.868 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:27:50.868 01:47:58 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:50.868 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:50.868 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:27:50.868 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:50.868 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:50.868 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:50.868 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:50.868 01:47:59 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:27:50.868 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:51.131 { 00:27:51.131 "name": "nvme0n1", 00:27:51.131 "aliases": [ 00:27:51.131 "8b487063-a9b1-4e01-a432-28065cf71060" 00:27:51.131 ], 00:27:51.131 "product_name": "NVMe disk", 00:27:51.131 "block_size": 4096, 00:27:51.131 "num_blocks": 1310720, 00:27:51.131 "uuid": "8b487063-a9b1-4e01-a432-28065cf71060", 00:27:51.131 "numa_id": -1, 00:27:51.131 "assigned_rate_limits": { 00:27:51.131 "rw_ios_per_sec": 0, 00:27:51.131 "rw_mbytes_per_sec": 0, 00:27:51.131 "r_mbytes_per_sec": 0, 00:27:51.131 "w_mbytes_per_sec": 0 00:27:51.131 }, 00:27:51.131 "claimed": true, 00:27:51.131 "claim_type": "read_many_write_one", 00:27:51.131 "zoned": false, 00:27:51.131 "supported_io_types": { 00:27:51.131 "read": true, 00:27:51.131 "write": true, 00:27:51.131 "unmap": true, 00:27:51.131 "flush": true, 00:27:51.131 "reset": true, 00:27:51.131 "nvme_admin": true, 00:27:51.131 "nvme_io": true, 00:27:51.131 "nvme_io_md": false, 00:27:51.131 "write_zeroes": true, 00:27:51.131 "zcopy": false, 00:27:51.131 "get_zone_info": false, 00:27:51.131 "zone_management": false, 00:27:51.131 "zone_append": false, 00:27:51.131 "compare": true, 00:27:51.131 "compare_and_write": false, 00:27:51.131 "abort": true, 00:27:51.131 "seek_hole": false, 00:27:51.131 "seek_data": false, 00:27:51.131 "copy": true, 00:27:51.131 "nvme_iov_md": false 00:27:51.131 }, 00:27:51.131 "driver_specific": { 00:27:51.131 "nvme": [ 00:27:51.131 { 00:27:51.131 "pci_address": "0000:00:11.0", 00:27:51.131 "trid": { 00:27:51.131 "trtype": "PCIe", 00:27:51.131 "traddr": "0000:00:11.0" 00:27:51.131 }, 00:27:51.131 "ctrlr_data": { 00:27:51.131 "cntlid": 0, 00:27:51.131 "vendor_id": "0x1b36", 00:27:51.131 "model_number": "QEMU NVMe Ctrl", 00:27:51.131 "serial_number": "12341", 00:27:51.131 "firmware_revision": "8.0.0", 00:27:51.131 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:51.131 "oacs": { 00:27:51.131 "security": 0, 00:27:51.131 "format": 1, 00:27:51.131 "firmware": 0, 00:27:51.131 "ns_manage": 1 00:27:51.131 }, 00:27:51.131 "multi_ctrlr": false, 00:27:51.131 "ana_reporting": false 00:27:51.131 }, 00:27:51.131 "vs": { 00:27:51.131 "nvme_version": "1.4" 00:27:51.131 }, 00:27:51.131 "ns_data": { 00:27:51.131 "id": 1, 00:27:51.131 "can_share": false 00:27:51.131 } 00:27:51.131 } 00:27:51.131 ], 00:27:51.131 "mp_policy": "active_passive" 00:27:51.131 } 00:27:51.131 } 00:27:51.131 ]' 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:27:51.131 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:51.391 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=612ae817-84a6-40bd-9b82-94fd99300c62 00:27:51.391 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:27:51.392 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 612ae817-84a6-40bd-9b82-94fd99300c62 00:27:51.652 01:47:59 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:51.652 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=77320a84-e645-4271-bfde-f577f3ff9650 00:27:51.652 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 77320a84-e645-4271-bfde-f577f3ff9650 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:51.912 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:52.173 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:52.173 { 00:27:52.173 "name": "774ba410-b0dd-42d1-88b8-6b202e3eb52f", 00:27:52.173 "aliases": [ 00:27:52.173 "lvs/nvme0n1p0" 00:27:52.173 ], 00:27:52.173 "product_name": "Logical Volume", 00:27:52.173 "block_size": 4096, 00:27:52.173 "num_blocks": 26476544, 00:27:52.173 "uuid": "774ba410-b0dd-42d1-88b8-6b202e3eb52f", 00:27:52.173 "assigned_rate_limits": { 00:27:52.173 "rw_ios_per_sec": 0, 00:27:52.173 "rw_mbytes_per_sec": 0, 00:27:52.173 "r_mbytes_per_sec": 0, 00:27:52.173 "w_mbytes_per_sec": 0 00:27:52.173 }, 00:27:52.173 "claimed": false, 00:27:52.173 "zoned": false, 00:27:52.173 "supported_io_types": { 00:27:52.173 "read": true, 00:27:52.173 "write": true, 00:27:52.173 "unmap": true, 00:27:52.173 "flush": false, 00:27:52.173 "reset": true, 00:27:52.173 "nvme_admin": false, 00:27:52.173 "nvme_io": false, 00:27:52.173 "nvme_io_md": false, 00:27:52.173 "write_zeroes": true, 00:27:52.173 "zcopy": false, 00:27:52.173 "get_zone_info": false, 00:27:52.173 "zone_management": false, 00:27:52.173 "zone_append": 
false, 00:27:52.173 "compare": false, 00:27:52.173 "compare_and_write": false, 00:27:52.173 "abort": false, 00:27:52.173 "seek_hole": true, 00:27:52.173 "seek_data": true, 00:27:52.173 "copy": false, 00:27:52.173 "nvme_iov_md": false 00:27:52.173 }, 00:27:52.173 "driver_specific": { 00:27:52.173 "lvol": { 00:27:52.173 "lvol_store_uuid": "77320a84-e645-4271-bfde-f577f3ff9650", 00:27:52.173 "base_bdev": "nvme0n1", 00:27:52.173 "thin_provision": true, 00:27:52.173 "num_allocated_clusters": 0, 00:27:52.173 "snapshot": false, 00:27:52.173 "clone": false, 00:27:52.173 "esnap_clone": false 00:27:52.173 } 00:27:52.173 } 00:27:52.173 } 00:27:52.173 ]' 00:27:52.173 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:52.173 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:52.173 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:52.173 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:52.173 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:52.173 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:52.173 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:27:52.173 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:27:52.173 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:52.434 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:52.434 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:52.434 01:48:00 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:52.434 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:52.434 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:52.434 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:52.434 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:52.434 01:48:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:52.696 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:52.696 { 00:27:52.696 "name": "774ba410-b0dd-42d1-88b8-6b202e3eb52f", 00:27:52.696 "aliases": [ 00:27:52.696 "lvs/nvme0n1p0" 00:27:52.696 ], 00:27:52.696 "product_name": "Logical Volume", 00:27:52.696 "block_size": 4096, 00:27:52.696 "num_blocks": 26476544, 00:27:52.696 "uuid": "774ba410-b0dd-42d1-88b8-6b202e3eb52f", 00:27:52.696 "assigned_rate_limits": { 00:27:52.696 "rw_ios_per_sec": 0, 00:27:52.696 "rw_mbytes_per_sec": 0, 00:27:52.696 "r_mbytes_per_sec": 0, 00:27:52.696 "w_mbytes_per_sec": 0 00:27:52.696 }, 00:27:52.696 "claimed": false, 00:27:52.696 "zoned": false, 00:27:52.696 "supported_io_types": { 00:27:52.696 "read": true, 00:27:52.696 "write": true, 00:27:52.696 "unmap": true, 00:27:52.696 "flush": false, 00:27:52.696 "reset": true, 00:27:52.696 "nvme_admin": false, 00:27:52.696 "nvme_io": false, 00:27:52.696 "nvme_io_md": false, 00:27:52.696 "write_zeroes": true, 00:27:52.696 "zcopy": false, 00:27:52.696 "get_zone_info": false, 00:27:52.696 "zone_management": false, 
00:27:52.696 "zone_append": false, 00:27:52.696 "compare": false, 00:27:52.696 "compare_and_write": false, 00:27:52.696 "abort": false, 00:27:52.696 "seek_hole": true, 00:27:52.696 "seek_data": true, 00:27:52.696 "copy": false, 00:27:52.696 "nvme_iov_md": false 00:27:52.696 }, 00:27:52.696 "driver_specific": { 00:27:52.696 "lvol": { 00:27:52.696 "lvol_store_uuid": "77320a84-e645-4271-bfde-f577f3ff9650", 00:27:52.696 "base_bdev": "nvme0n1", 00:27:52.696 "thin_provision": true, 00:27:52.696 "num_allocated_clusters": 0, 00:27:52.696 "snapshot": false, 00:27:52.696 "clone": false, 00:27:52.696 "esnap_clone": false 00:27:52.696 } 00:27:52.696 } 00:27:52.696 } 00:27:52.696 ]' 00:27:52.696 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:52.696 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:52.696 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:52.696 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:52.696 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:52.696 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:52.696 01:48:01 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:27:52.696 01:48:01 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:52.957 01:48:01 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:27:52.957 01:48:01 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:52.957 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:52.957 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:52.957 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:52.957 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:52.957 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 774ba410-b0dd-42d1-88b8-6b202e3eb52f 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:53.217 { 00:27:53.217 "name": "774ba410-b0dd-42d1-88b8-6b202e3eb52f", 00:27:53.217 "aliases": [ 00:27:53.217 "lvs/nvme0n1p0" 00:27:53.217 ], 00:27:53.217 "product_name": "Logical Volume", 00:27:53.217 "block_size": 4096, 00:27:53.217 "num_blocks": 26476544, 00:27:53.217 "uuid": "774ba410-b0dd-42d1-88b8-6b202e3eb52f", 00:27:53.217 "assigned_rate_limits": { 00:27:53.217 "rw_ios_per_sec": 0, 00:27:53.217 "rw_mbytes_per_sec": 0, 00:27:53.217 "r_mbytes_per_sec": 0, 00:27:53.217 "w_mbytes_per_sec": 0 00:27:53.217 }, 00:27:53.217 "claimed": false, 00:27:53.217 "zoned": false, 00:27:53.217 "supported_io_types": { 00:27:53.217 "read": true, 00:27:53.217 "write": true, 00:27:53.217 "unmap": true, 00:27:53.217 "flush": false, 00:27:53.217 "reset": true, 00:27:53.217 "nvme_admin": false, 00:27:53.217 "nvme_io": false, 00:27:53.217 "nvme_io_md": false, 00:27:53.217 "write_zeroes": true, 00:27:53.217 "zcopy": false, 00:27:53.217 "get_zone_info": false, 00:27:53.217 "zone_management": false, 00:27:53.217 "zone_append": false, 00:27:53.217 "compare": false, 00:27:53.217 "compare_and_write": false, 00:27:53.217 "abort": false, 00:27:53.217 "seek_hole": 
true, 00:27:53.217 "seek_data": true, 00:27:53.217 "copy": false, 00:27:53.217 "nvme_iov_md": false 00:27:53.217 }, 00:27:53.217 "driver_specific": { 00:27:53.217 "lvol": { 00:27:53.217 "lvol_store_uuid": "77320a84-e645-4271-bfde-f577f3ff9650", 00:27:53.217 "base_bdev": "nvme0n1", 00:27:53.217 "thin_provision": true, 00:27:53.217 "num_allocated_clusters": 0, 00:27:53.217 "snapshot": false, 00:27:53.217 "clone": false, 00:27:53.217 "esnap_clone": false 00:27:53.217 } 00:27:53.217 } 00:27:53.217 } 00:27:53.217 ]' 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 774ba410-b0dd-42d1-88b8-6b202e3eb52f --l2p_dram_limit 10' 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:27:53.217 01:48:01 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 774ba410-b0dd-42d1-88b8-6b202e3eb52f --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:27:53.478 [2024-11-17 01:48:01.760373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.478 [2024-11-17 01:48:01.760414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:53.478 [2024-11-17 01:48:01.760426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:53.478 [2024-11-17 01:48:01.760433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.478 [2024-11-17 01:48:01.760474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.478 [2024-11-17 01:48:01.760481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:53.478 [2024-11-17 01:48:01.760489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:53.478 [2024-11-17 01:48:01.760495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.478 [2024-11-17 01:48:01.760513] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:53.478 [2024-11-17 01:48:01.761101] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:53.478 [2024-11-17 01:48:01.761123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.478 [2024-11-17 01:48:01.761129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:53.478 [2024-11-17 01:48:01.761137] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:27:53.478 [2024-11-17 01:48:01.761144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.478 [2024-11-17 01:48:01.761171] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2726e5b2-d172-4a51-bda9-409ff1009ced 00:27:53.478 [2024-11-17 01:48:01.762123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.478 [2024-11-17 01:48:01.762148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:53.478 [2024-11-17 01:48:01.762156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:53.478 [2024-11-17 01:48:01.762166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.478 [2024-11-17 01:48:01.766843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.478 [2024-11-17 01:48:01.766874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:53.478 [2024-11-17 01:48:01.766883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.613 ms 00:27:53.478 [2024-11-17 01:48:01.766890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.478 [2024-11-17 01:48:01.766956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.478 [2024-11-17 01:48:01.766965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:53.478 [2024-11-17 01:48:01.766971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:53.478 [2024-11-17 01:48:01.766981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.478 [2024-11-17 01:48:01.767009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.478 [2024-11-17 01:48:01.767017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:53.478 [2024-11-17 01:48:01.767023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:53.478 [2024-11-17 01:48:01.767031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.478 [2024-11-17 01:48:01.767048] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:53.478 [2024-11-17 01:48:01.769914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.478 [2024-11-17 01:48:01.769943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:53.478 [2024-11-17 01:48:01.769952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.870 ms 00:27:53.478 [2024-11-17 01:48:01.769958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.478 [2024-11-17 01:48:01.769987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.478 [2024-11-17 01:48:01.769994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:53.478 [2024-11-17 01:48:01.770001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:53.478 [2024-11-17 01:48:01.770007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.478 [2024-11-17 01:48:01.770020] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:53.478 [2024-11-17 01:48:01.770122] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:53.478 [2024-11-17 01:48:01.770142] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:53.478 [2024-11-17 01:48:01.770151] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:53.478 [2024-11-17 01:48:01.770160] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:53.478 [2024-11-17 01:48:01.770167] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:53.479 [2024-11-17 01:48:01.770174] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:53.479 [2024-11-17 01:48:01.770180] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:53.479 [2024-11-17 01:48:01.770189] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:53.479 [2024-11-17 01:48:01.770194] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:53.479 [2024-11-17 01:48:01.770201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.479 [2024-11-17 01:48:01.770207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:53.479 [2024-11-17 01:48:01.770215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:27:53.479 [2024-11-17 01:48:01.770227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.479 [2024-11-17 01:48:01.770291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.479 [2024-11-17 01:48:01.770303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:53.479 [2024-11-17 01:48:01.770311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:53.479 [2024-11-17 01:48:01.770317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.479 [2024-11-17 01:48:01.770394] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:53.479 [2024-11-17 01:48:01.770402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:53.479 [2024-11-17 01:48:01.770409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:53.479 [2024-11-17 01:48:01.770415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:53.479 [2024-11-17 01:48:01.770427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:53.479 [2024-11-17 01:48:01.770440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:53.479 [2024-11-17 01:48:01.770446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:53.479 [2024-11-17 01:48:01.770458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:53.479 [2024-11-17 01:48:01.770463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:53.479 [2024-11-17 01:48:01.770469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:53.479 [2024-11-17 01:48:01.770474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:53.479 [2024-11-17 01:48:01.770481] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:27:53.479 [2024-11-17 01:48:01.770487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:53.479 [2024-11-17 01:48:01.770501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:53.479 [2024-11-17 01:48:01.770507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:53.479 [2024-11-17 01:48:01.770520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.479 [2024-11-17 01:48:01.770531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:53.479 [2024-11-17 01:48:01.770536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.479 [2024-11-17 01:48:01.770548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:53.479 [2024-11-17 01:48:01.770554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.479 [2024-11-17 01:48:01.770565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:53.479 [2024-11-17 01:48:01.770570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.479 [2024-11-17 01:48:01.770581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:53.479 [2024-11-17 01:48:01.770589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:53.479 [2024-11-17 01:48:01.770600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:53.479 [2024-11-17 01:48:01.770605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:53.479 [2024-11-17 01:48:01.770610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:53.479 [2024-11-17 01:48:01.770615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:53.479 [2024-11-17 01:48:01.770622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:53.479 [2024-11-17 01:48:01.770626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:53.479 [2024-11-17 01:48:01.770638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:53.479 [2024-11-17 01:48:01.770644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770648] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:53.479 [2024-11-17 01:48:01.770657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:53.479 [2024-11-17 01:48:01.770663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:53.479 [2024-11-17 
01:48:01.770670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.479 [2024-11-17 01:48:01.770677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:53.479 [2024-11-17 01:48:01.770685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:53.479 [2024-11-17 01:48:01.770690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:53.479 [2024-11-17 01:48:01.770697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:53.479 [2024-11-17 01:48:01.770702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:53.479 [2024-11-17 01:48:01.770708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:53.479 [2024-11-17 01:48:01.770715] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:53.479 [2024-11-17 01:48:01.770724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:53.479 [2024-11-17 01:48:01.770732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:53.479 [2024-11-17 01:48:01.770739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:53.479 [2024-11-17 01:48:01.770745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:53.479 [2024-11-17 01:48:01.770751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:53.479 [2024-11-17 01:48:01.770757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:53.479 [2024-11-17 01:48:01.770763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:53.479 [2024-11-17 01:48:01.770768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:53.479 [2024-11-17 01:48:01.770775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:53.479 [2024-11-17 01:48:01.770781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:53.479 [2024-11-17 01:48:01.770804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:53.479 [2024-11-17 01:48:01.770810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:53.479 [2024-11-17 01:48:01.770818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:53.479 [2024-11-17 01:48:01.770824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:53.479 [2024-11-17 01:48:01.770831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:53.479 [2024-11-17 
01:48:01.770836] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:53.479 [2024-11-17 01:48:01.770843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:53.479 [2024-11-17 01:48:01.770850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:53.479 [2024-11-17 01:48:01.770856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:53.479 [2024-11-17 01:48:01.770862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:53.479 [2024-11-17 01:48:01.770868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:53.479 [2024-11-17 01:48:01.770874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.479 [2024-11-17 01:48:01.770881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:53.479 [2024-11-17 01:48:01.770887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:27:53.479 [2024-11-17 01:48:01.770894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.479 [2024-11-17 01:48:01.770933] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:53.479 [2024-11-17 01:48:01.770944] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:56.786 [2024-11-17 01:48:04.951166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:04.951243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:56.786 [2024-11-17 01:48:04.951258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3180.221 ms 00:27:56.786 [2024-11-17 01:48:04.951277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:04.978943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:04.979000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:56.786 [2024-11-17 01:48:04.979013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.452 ms 00:27:56.786 [2024-11-17 01:48:04.979025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:04.979148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:04.979161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:56.786 [2024-11-17 01:48:04.979170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:56.786 [2024-11-17 01:48:04.979181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.012572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.012631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:56.786 [2024-11-17 01:48:05.012643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.356 ms 00:27:56.786 [2024-11-17 01:48:05.012654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.012687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.012702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:56.786 [2024-11-17 01:48:05.012711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:56.786 [2024-11-17 01:48:05.012721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.013316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.013354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:56.786 [2024-11-17 01:48:05.013366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:27:56.786 [2024-11-17 01:48:05.013377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.013490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.013508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:56.786 [2024-11-17 01:48:05.013521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:27:56.786 [2024-11-17 01:48:05.013534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.030646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.030702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:56.786 [2024-11-17 01:48:05.030714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.093 ms 00:27:56.786 [2024-11-17 01:48:05.030724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.043671] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:56.786 [2024-11-17 01:48:05.047404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.047449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:56.786 [2024-11-17 01:48:05.047462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.580 ms 00:27:56.786 [2024-11-17 01:48:05.047470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.159351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.159416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:56.786 [2024-11-17 01:48:05.159435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.846 ms 00:27:56.786 [2024-11-17 01:48:05.159444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.159651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.159667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:56.786 [2024-11-17 01:48:05.159682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:27:56.786 [2024-11-17 01:48:05.159690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.184825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.184878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
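(The trace_step lines in this run follow a fixed four-entry pattern per management step: "Action", "name: <step>", "duration: <ms>", "status: <rc>". When hunting for where startup time goes -- the "Scrub NV cache" step above alone accounts for 3180.221 ms of the 3614.710 ms 'FTL startup' total reported further down -- it can help to fold each name/duration pair into one row and sort. A minimal offline sketch, not part of restore.sh, assuming the console log has been saved one entry per line as build.log, a hypothetical filename:

  awk '
    /trace_step.*name:/     { n = $0; sub(/.*name: /, "", n) }          # remember the step name
    /trace_step.*duration:/ { d = $0; sub(/.*duration: /, "", d)       # grab the matching duration
                              sub(/ ms.*/, "", d)
                              printf "%10.3f ms  %s\n", d, n }
  ' build.log | sort -rn | head                                         # slowest steps first
)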
00:27:56.786 [2024-11-17 01:48:05.184894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.078 ms 00:27:56.786 [2024-11-17 01:48:05.184902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.209330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.209392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:56.786 [2024-11-17 01:48:05.209408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.370 ms 00:27:56.786 [2024-11-17 01:48:05.209415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.786 [2024-11-17 01:48:05.210049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.786 [2024-11-17 01:48:05.210069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:56.786 [2024-11-17 01:48:05.210083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:27:56.786 [2024-11-17 01:48:05.210091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.048 [2024-11-17 01:48:05.295491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.049 [2024-11-17 01:48:05.295546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:57.049 [2024-11-17 01:48:05.295566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.349 ms 00:27:57.049 [2024-11-17 01:48:05.295575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.049 [2024-11-17 01:48:05.322498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.049 [2024-11-17 01:48:05.322550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:57.049 [2024-11-17 01:48:05.322565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.846 ms 00:27:57.049 [2024-11-17 01:48:05.322573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.049 [2024-11-17 01:48:05.348056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.049 [2024-11-17 01:48:05.348104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:57.049 [2024-11-17 01:48:05.348118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.451 ms 00:27:57.049 [2024-11-17 01:48:05.348125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.049 [2024-11-17 01:48:05.374148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.049 [2024-11-17 01:48:05.374201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:57.049 [2024-11-17 01:48:05.374216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.990 ms 00:27:57.049 [2024-11-17 01:48:05.374224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.049 [2024-11-17 01:48:05.374258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.049 [2024-11-17 01:48:05.374267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:57.049 [2024-11-17 01:48:05.374282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:57.049 [2024-11-17 01:48:05.374290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.049 [2024-11-17 01:48:05.374382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.049 [2024-11-17 01:48:05.374393] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:57.049 [2024-11-17 01:48:05.374407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:57.049 [2024-11-17 01:48:05.374415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.049 [2024-11-17 01:48:05.375602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3614.710 ms, result 0 00:27:57.049 { 00:27:57.049 "name": "ftl0", 00:27:57.049 "uuid": "2726e5b2-d172-4a51-bda9-409ff1009ced" 00:27:57.049 } 00:27:57.049 01:48:05 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:57.049 01:48:05 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:57.310 01:48:05 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:27:57.310 01:48:05 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:57.573 [2024-11-17 01:48:05.814965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.815025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:57.573 [2024-11-17 01:48:05.815037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:57.573 [2024-11-17 01:48:05.815055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.815080] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:57.573 [2024-11-17 01:48:05.818059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.818100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:57.573 [2024-11-17 01:48:05.818113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:27:57.573 [2024-11-17 01:48:05.818121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.818391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.818402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:57.573 [2024-11-17 01:48:05.818416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:27:57.573 [2024-11-17 01:48:05.818424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.821678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.821702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:57.573 [2024-11-17 01:48:05.821714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:27:57.573 [2024-11-17 01:48:05.821722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.827979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.828020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:57.573 [2024-11-17 01:48:05.828038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.234 ms 00:27:57.573 [2024-11-17 01:48:05.828046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.853357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 
[2024-11-17 01:48:05.853409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:57.573 [2024-11-17 01:48:05.853423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.228 ms 00:27:57.573 [2024-11-17 01:48:05.853431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.871296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.871347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:57.573 [2024-11-17 01:48:05.871362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.812 ms 00:27:57.573 [2024-11-17 01:48:05.871370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.871535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.871548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:57.573 [2024-11-17 01:48:05.871559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:27:57.573 [2024-11-17 01:48:05.871567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.896890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.896939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:57.573 [2024-11-17 01:48:05.896953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.302 ms 00:27:57.573 [2024-11-17 01:48:05.896960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.922074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.922120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:57.573 [2024-11-17 01:48:05.922133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.065 ms 00:27:57.573 [2024-11-17 01:48:05.922140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.946950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.947001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:57.573 [2024-11-17 01:48:05.947014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.756 ms 00:27:57.573 [2024-11-17 01:48:05.947021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.971561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.573 [2024-11-17 01:48:05.971611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:57.573 [2024-11-17 01:48:05.971625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.446 ms 00:27:57.573 [2024-11-17 01:48:05.971631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.573 [2024-11-17 01:48:05.971678] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:57.573 [2024-11-17 01:48:05.971694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:57.573 [2024-11-17 01:48:05.971810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971955] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.971994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 
01:48:05.972188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:27:57.574 [2024-11-17 01:48:05.972417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:57.574 [2024-11-17 01:48:05.972458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:57.575 [2024-11-17 01:48:05.972629] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:57.575 [2024-11-17 01:48:05.972642] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2726e5b2-d172-4a51-bda9-409ff1009ced 00:27:57.575 
[2024-11-17 01:48:05.972650] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:57.575 [2024-11-17 01:48:05.972662] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:57.575 [2024-11-17 01:48:05.972669] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:57.575 [2024-11-17 01:48:05.972682] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:57.575 [2024-11-17 01:48:05.972689] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:57.575 [2024-11-17 01:48:05.972698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:57.575 [2024-11-17 01:48:05.972706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:57.575 [2024-11-17 01:48:05.972714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:57.575 [2024-11-17 01:48:05.972720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:57.575 [2024-11-17 01:48:05.972730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.575 [2024-11-17 01:48:05.972738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:57.575 [2024-11-17 01:48:05.972749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:27:57.575 [2024-11-17 01:48:05.972756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.575 [2024-11-17 01:48:05.986362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.575 [2024-11-17 01:48:05.986402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:57.575 [2024-11-17 01:48:05.986416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.548 ms 00:27:57.575 [2024-11-17 01:48:05.986424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.575 [2024-11-17 01:48:05.986842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.575 [2024-11-17 01:48:05.986863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:57.575 [2024-11-17 01:48:05.986875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:27:57.575 [2024-11-17 01:48:05.986888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.033052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.033105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:57.837 [2024-11-17 01:48:06.033120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.033128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.033198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.033207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:57.837 [2024-11-17 01:48:06.033217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.033229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.033326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.033338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:57.837 [2024-11-17 01:48:06.033348] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.033356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.033379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.033388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:57.837 [2024-11-17 01:48:06.033398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.033406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.117397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.117455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:57.837 [2024-11-17 01:48:06.117471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.117479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.186562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.186623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:57.837 [2024-11-17 01:48:06.186638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.186651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.186738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.186749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:57.837 [2024-11-17 01:48:06.186760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.186768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.186864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.186877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:57.837 [2024-11-17 01:48:06.186888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.186896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.187006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.187016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:57.837 [2024-11-17 01:48:06.187027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.187035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.187073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.187083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:57.837 [2024-11-17 01:48:06.187094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.187103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.187150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.187162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:27:57.837 [2024-11-17 01:48:06.187173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.187181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.187234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.837 [2024-11-17 01:48:06.187244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:57.837 [2024-11-17 01:48:06.187255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.837 [2024-11-17 01:48:06.187263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.837 [2024-11-17 01:48:06.187431] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.428 ms, result 0 00:27:57.837 true 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 81392 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81392 ']' 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81392 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81392 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:57.837 killing process with pid 81392 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81392' 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 81392 00:27:57.837 01:48:06 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 81392 00:28:04.420 01:48:12 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:08.625 262144+0 records in 00:28:08.625 262144+0 records out 00:28:08.625 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.01493 s, 267 MB/s 00:28:08.625 01:48:16 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:10.007 01:48:18 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:10.007 [2024-11-17 01:48:18.414301] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
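(The dd numbers above check out: bs=4K count=256K is 262144 blocks of 4096 bytes, i.e. 1,073,741,824 bytes (1 GiB), and 1,073,741,824 B / 4.01493 s is roughly 267 MB/s, matching dd's own summary. Together with the surrounding steps this sketches the test's round trip: fill a file with random data, checksum it, push it through the FTL bdev with spdk_dd, and later read it back for comparison. A hedged reconstruction from the commands visible in this log -- the readback flags --ib/--of and the --count bound are assumed by symmetry with the --if/--ob write at restore.sh@73 and are not shown here:

  TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  CONFIG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  dd if=/dev/urandom of=$TESTFILE bs=4K count=256K       # 1 GiB of random data (as in restore.sh@69)
  md5sum $TESTFILE                                       # checksum recorded before the write (restore.sh@70)
  spdk_dd --if=$TESTFILE --ob=ftl0 --json=$CONFIG        # file -> ftl0 (restore.sh@73)
  spdk_dd --ib=ftl0 --of=$TESTFILE.read --json=$CONFIG --count=262144   # ftl0 -> file (assumed readback)
  md5sum $TESTFILE.read                                  # should match the recorded checksum
)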
00:28:10.007 [2024-11-17 01:48:18.414423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81617 ] 00:28:10.268 [2024-11-17 01:48:18.559696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.268 [2024-11-17 01:48:18.635930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.528 [2024-11-17 01:48:18.839891] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:10.528 [2024-11-17 01:48:18.839933] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:10.790 [2024-11-17 01:48:18.992046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.790 [2024-11-17 01:48:18.992082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:10.790 [2024-11-17 01:48:18.992095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:10.790 [2024-11-17 01:48:18.992102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:18.992137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.791 [2024-11-17 01:48:18.992145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:10.791 [2024-11-17 01:48:18.992154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:10.791 [2024-11-17 01:48:18.992159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:18.992172] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:10.791 [2024-11-17 01:48:18.992716] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:10.791 [2024-11-17 01:48:18.992734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.791 [2024-11-17 01:48:18.992741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:10.791 [2024-11-17 01:48:18.992747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:28:10.791 [2024-11-17 01:48:18.992752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:18.993848] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:10.791 [2024-11-17 01:48:19.003647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.791 [2024-11-17 01:48:19.003677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:10.791 [2024-11-17 01:48:19.003691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.800 ms 00:28:10.791 [2024-11-17 01:48:19.003697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:19.003737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.791 [2024-11-17 01:48:19.003744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:10.791 [2024-11-17 01:48:19.003750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:10.791 [2024-11-17 01:48:19.003756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:19.008070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
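(Unlike the first bring-up, which logged "Create new FTL, UUID ..." under "FTL layout setup mode 1", this second startup -- running inside spdk_dd -- performs "Load super block" and, just below, "FTL layout setup mode 0": ftl0 is reattached from the superblock persisted during the clean shutdown, presumably under the same UUID. The bdev configuration it consumes via --json was assembled earlier by the three restore.sh@61-63 commands visible above; a minimal sketch of that assembly, with the redirection target assumed from the --json path rather than shown in the log:

  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev   # dump the live bdev config
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json                     # target path assumed
)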
00:28:10.791 [2024-11-17 01:48:19.008094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:10.791 [2024-11-17 01:48:19.008101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.261 ms 00:28:10.791 [2024-11-17 01:48:19.008107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:19.008163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.791 [2024-11-17 01:48:19.008170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:10.791 [2024-11-17 01:48:19.008176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:10.791 [2024-11-17 01:48:19.008181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:19.008211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.791 [2024-11-17 01:48:19.008218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:10.791 [2024-11-17 01:48:19.008225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:10.791 [2024-11-17 01:48:19.008230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:19.008243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:10.791 [2024-11-17 01:48:19.010870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.791 [2024-11-17 01:48:19.010895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:10.791 [2024-11-17 01:48:19.010902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:28:10.791 [2024-11-17 01:48:19.010910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:19.010936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.791 [2024-11-17 01:48:19.010942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:10.791 [2024-11-17 01:48:19.010949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:10.791 [2024-11-17 01:48:19.010954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:19.010968] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:10.791 [2024-11-17 01:48:19.010983] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:10.791 [2024-11-17 01:48:19.011009] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:10.791 [2024-11-17 01:48:19.011021] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:10.791 [2024-11-17 01:48:19.011099] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:10.791 [2024-11-17 01:48:19.011107] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:10.791 [2024-11-17 01:48:19.011115] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:10.791 [2024-11-17 01:48:19.011122] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:10.791 [2024-11-17 01:48:19.011128] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:10.791 [2024-11-17 01:48:19.011134] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:10.791 [2024-11-17 01:48:19.011140] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:10.791 [2024-11-17 01:48:19.011145] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:10.791 [2024-11-17 01:48:19.011151] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:10.791 [2024-11-17 01:48:19.011159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.791 [2024-11-17 01:48:19.011164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:10.791 [2024-11-17 01:48:19.011170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:28:10.791 [2024-11-17 01:48:19.011176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:19.011238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.791 [2024-11-17 01:48:19.011245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:10.791 [2024-11-17 01:48:19.011251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:10.791 [2024-11-17 01:48:19.011256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.791 [2024-11-17 01:48:19.011347] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:10.791 [2024-11-17 01:48:19.011357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:10.791 [2024-11-17 01:48:19.011363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:10.791 [2024-11-17 01:48:19.011369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:10.791 [2024-11-17 01:48:19.011380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:10.791 [2024-11-17 01:48:19.011391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:10.791 [2024-11-17 01:48:19.011396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:10.791 [2024-11-17 01:48:19.011405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:10.791 [2024-11-17 01:48:19.011410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:10.791 [2024-11-17 01:48:19.011416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:10.791 [2024-11-17 01:48:19.011421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:10.791 [2024-11-17 01:48:19.011426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:10.791 [2024-11-17 01:48:19.011436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:10.791 [2024-11-17 01:48:19.011446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:10.791 [2024-11-17 01:48:19.011451] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:10.791 [2024-11-17 01:48:19.011462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:10.791 [2024-11-17 01:48:19.011471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:10.791 [2024-11-17 01:48:19.011476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:10.791 [2024-11-17 01:48:19.011486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:10.791 [2024-11-17 01:48:19.011491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:10.791 [2024-11-17 01:48:19.011500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:10.791 [2024-11-17 01:48:19.011505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:10.791 [2024-11-17 01:48:19.011515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:10.791 [2024-11-17 01:48:19.011520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:10.791 [2024-11-17 01:48:19.011530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:10.791 [2024-11-17 01:48:19.011535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:10.791 [2024-11-17 01:48:19.011540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:10.791 [2024-11-17 01:48:19.011545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:10.791 [2024-11-17 01:48:19.011550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:10.791 [2024-11-17 01:48:19.011554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.791 [2024-11-17 01:48:19.011559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:10.791 [2024-11-17 01:48:19.011568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:10.791 [2024-11-17 01:48:19.011573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.792 [2024-11-17 01:48:19.011578] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:10.792 [2024-11-17 01:48:19.011583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:10.792 [2024-11-17 01:48:19.011591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:10.792 [2024-11-17 01:48:19.011598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.792 [2024-11-17 01:48:19.011606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:10.792 [2024-11-17 01:48:19.011612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:10.792 [2024-11-17 01:48:19.011619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:10.792 
[2024-11-17 01:48:19.011625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:10.792 [2024-11-17 01:48:19.011632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:10.792 [2024-11-17 01:48:19.011637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:10.792 [2024-11-17 01:48:19.011647] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:10.792 [2024-11-17 01:48:19.011654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:10.792 [2024-11-17 01:48:19.011663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:10.792 [2024-11-17 01:48:19.011668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:10.792 [2024-11-17 01:48:19.011674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:10.792 [2024-11-17 01:48:19.011679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:10.792 [2024-11-17 01:48:19.011688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:10.792 [2024-11-17 01:48:19.011693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:10.792 [2024-11-17 01:48:19.011701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:10.792 [2024-11-17 01:48:19.011707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:10.792 [2024-11-17 01:48:19.011713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:10.792 [2024-11-17 01:48:19.011722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:10.792 [2024-11-17 01:48:19.011727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:10.792 [2024-11-17 01:48:19.011732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:10.792 [2024-11-17 01:48:19.011737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:10.792 [2024-11-17 01:48:19.011742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:10.792 [2024-11-17 01:48:19.011748] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:10.792 [2024-11-17 01:48:19.011756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:10.792 [2024-11-17 01:48:19.011762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:10.792 [2024-11-17 01:48:19.011767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:10.792 [2024-11-17 01:48:19.011772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:10.792 [2024-11-17 01:48:19.011778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:10.792 [2024-11-17 01:48:19.011784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.011799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:10.792 [2024-11-17 01:48:19.011805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:28:10.792 [2024-11-17 01:48:19.011811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.032592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.032622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:10.792 [2024-11-17 01:48:19.032630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.748 ms 00:28:10.792 [2024-11-17 01:48:19.032636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.032701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.032708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:10.792 [2024-11-17 01:48:19.032714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:10.792 [2024-11-17 01:48:19.032720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.075247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.075287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:10.792 [2024-11-17 01:48:19.075297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.489 ms 00:28:10.792 [2024-11-17 01:48:19.075303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.075333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.075340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:10.792 [2024-11-17 01:48:19.075347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:10.792 [2024-11-17 01:48:19.075355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.075663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.075684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:10.792 [2024-11-17 01:48:19.075691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:28:10.792 [2024-11-17 01:48:19.075697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.075804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.075817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:10.792 [2024-11-17 01:48:19.075824] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:28:10.792 [2024-11-17 01:48:19.075830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.086196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.086222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:10.792 [2024-11-17 01:48:19.086229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.348 ms 00:28:10.792 [2024-11-17 01:48:19.086237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.096363] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:10.792 [2024-11-17 01:48:19.096392] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:10.792 [2024-11-17 01:48:19.096400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.096407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:10.792 [2024-11-17 01:48:19.096413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.086 ms 00:28:10.792 [2024-11-17 01:48:19.096419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.114774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.114807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:10.792 [2024-11-17 01:48:19.114819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.327 ms 00:28:10.792 [2024-11-17 01:48:19.114825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.123803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.123836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:10.792 [2024-11-17 01:48:19.123843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.950 ms 00:28:10.792 [2024-11-17 01:48:19.123848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.132691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.132717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:10.792 [2024-11-17 01:48:19.132724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.818 ms 00:28:10.792 [2024-11-17 01:48:19.132730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.133182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.133202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:10.792 [2024-11-17 01:48:19.133209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:28:10.792 [2024-11-17 01:48:19.133216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.176896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.176933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:10.792 [2024-11-17 01:48:19.176944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 43.667 ms 00:28:10.792 [2024-11-17 01:48:19.176955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.184864] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:10.792 [2024-11-17 01:48:19.186546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.186570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:10.792 [2024-11-17 01:48:19.186577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.555 ms 00:28:10.792 [2024-11-17 01:48:19.186583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.186634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.186643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:10.792 [2024-11-17 01:48:19.186650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:10.792 [2024-11-17 01:48:19.186656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.792 [2024-11-17 01:48:19.186714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.792 [2024-11-17 01:48:19.186722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:10.792 [2024-11-17 01:48:19.186729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:10.793 [2024-11-17 01:48:19.186735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.793 [2024-11-17 01:48:19.186749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.793 [2024-11-17 01:48:19.186755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:10.793 [2024-11-17 01:48:19.186761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:10.793 [2024-11-17 01:48:19.186767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.793 [2024-11-17 01:48:19.186801] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:10.793 [2024-11-17 01:48:19.186810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.793 [2024-11-17 01:48:19.186818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:10.793 [2024-11-17 01:48:19.186824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:10.793 [2024-11-17 01:48:19.186829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.793 [2024-11-17 01:48:19.204818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.793 [2024-11-17 01:48:19.204847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:10.793 [2024-11-17 01:48:19.204855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.975 ms 00:28:10.793 [2024-11-17 01:48:19.204861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.793 [2024-11-17 01:48:19.204916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.793 [2024-11-17 01:48:19.204923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:10.793 [2024-11-17 01:48:19.204929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:10.793 [2024-11-17 01:48:19.204935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
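The trace_step notices above name each startup step and report its duration and status; the finish_msg line that follows sums the whole sequence up as 'FTL startup'. A minimal sketch of tallying the per-step durations against that total, assuming only the NOTICE line shapes visible in this log (the regexes and the console.log filename are illustrative, not an SPDK API):

import re

# Line shapes as they appear in this log; illustrative, not an SPDK API.
STEP = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")
TOTAL = re.compile(r"finish_msg: \*NOTICE\*: \[FTL\]\[\w+\] Management process "
                   r"finished, name '([^']+)', duration = ([0-9.]+) ms")

def summarize(excerpt):
    # Pass one management sequence (e.g. the startup above); this sketch
    # makes no attempt to segment a full log into separate processes.
    steps = [float(ms) for ms in STEP.findall(excerpt)]
    done = TOTAL.search(excerpt)
    if done:
        name, total = done.group(1), float(done.group(2))
        # The per-step sum lands below the total: time spent between steps
        # (I/O completions, polling) is not attributed to any single step.
        print(f"{name}: {len(steps)} steps, "
              f"{sum(steps):.3f} ms of {total:.3f} ms accounted for")

summarize(open("console.log").read())  # hypothetical excerpt file

In the sequence above the sum is dominated by Restore P2L checkpoints (43.667 ms) and Initialize NV cache (42.489 ms), with Initialize metadata (20.748 ms) and Restore valid map metadata (18.327 ms) next.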
00:28:10.793 [2024-11-17 01:48:19.205726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 213.359 ms, result 0 00:28:12.177  [2024-11-17T01:48:21.224Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-17T01:48:22.614Z] Copying: 41/1024 [MB] (19 MBps) [2024-11-17T01:48:23.558Z] Copying: 57/1024 [MB] (16 MBps) [2024-11-17T01:48:24.501Z] Copying: 69/1024 [MB] (11 MBps) [2024-11-17T01:48:25.445Z] Copying: 85/1024 [MB] (16 MBps) [2024-11-17T01:48:26.388Z] Copying: 97/1024 [MB] (11 MBps) [2024-11-17T01:48:27.330Z] Copying: 111/1024 [MB] (13 MBps) [2024-11-17T01:48:28.274Z] Copying: 133/1024 [MB] (22 MBps) [2024-11-17T01:48:29.220Z] Copying: 158/1024 [MB] (24 MBps) [2024-11-17T01:48:30.605Z] Copying: 178/1024 [MB] (20 MBps) [2024-11-17T01:48:31.549Z] Copying: 197/1024 [MB] (19 MBps) [2024-11-17T01:48:32.494Z] Copying: 234/1024 [MB] (37 MBps) [2024-11-17T01:48:33.440Z] Copying: 255/1024 [MB] (21 MBps) [2024-11-17T01:48:34.384Z] Copying: 268/1024 [MB] (12 MBps) [2024-11-17T01:48:35.329Z] Copying: 280/1024 [MB] (12 MBps) [2024-11-17T01:48:36.271Z] Copying: 294/1024 [MB] (14 MBps) [2024-11-17T01:48:37.659Z] Copying: 305/1024 [MB] (10 MBps) [2024-11-17T01:48:38.231Z] Copying: 322504/1048576 [kB] (10136 kBps) [2024-11-17T01:48:39.620Z] Copying: 327/1024 [MB] (12 MBps) [2024-11-17T01:48:40.561Z] Copying: 337/1024 [MB] (10 MBps) [2024-11-17T01:48:41.506Z] Copying: 347/1024 [MB] (10 MBps) [2024-11-17T01:48:42.449Z] Copying: 359/1024 [MB] (11 MBps) [2024-11-17T01:48:43.394Z] Copying: 370/1024 [MB] (10 MBps) [2024-11-17T01:48:44.338Z] Copying: 385/1024 [MB] (15 MBps) [2024-11-17T01:48:45.283Z] Copying: 406/1024 [MB] (21 MBps) [2024-11-17T01:48:46.226Z] Copying: 426/1024 [MB] (20 MBps) [2024-11-17T01:48:47.613Z] Copying: 438/1024 [MB] (12 MBps) [2024-11-17T01:48:48.558Z] Copying: 455/1024 [MB] (16 MBps) [2024-11-17T01:48:49.506Z] Copying: 465/1024 [MB] (10 MBps) [2024-11-17T01:48:50.450Z] Copying: 482/1024 [MB] (16 MBps) [2024-11-17T01:48:51.395Z] Copying: 493/1024 [MB] (11 MBps) [2024-11-17T01:48:52.339Z] Copying: 509/1024 [MB] (16 MBps) [2024-11-17T01:48:53.332Z] Copying: 523/1024 [MB] (13 MBps) [2024-11-17T01:48:54.276Z] Copying: 546/1024 [MB] (23 MBps) [2024-11-17T01:48:55.221Z] Copying: 557/1024 [MB] (11 MBps) [2024-11-17T01:48:56.608Z] Copying: 575/1024 [MB] (18 MBps) [2024-11-17T01:48:57.552Z] Copying: 608/1024 [MB] (32 MBps) [2024-11-17T01:48:58.495Z] Copying: 639/1024 [MB] (30 MBps) [2024-11-17T01:48:59.444Z] Copying: 659/1024 [MB] (20 MBps) [2024-11-17T01:49:00.386Z] Copying: 677/1024 [MB] (17 MBps) [2024-11-17T01:49:01.331Z] Copying: 688/1024 [MB] (11 MBps) [2024-11-17T01:49:02.275Z] Copying: 705/1024 [MB] (16 MBps) [2024-11-17T01:49:03.220Z] Copying: 720/1024 [MB] (14 MBps) [2024-11-17T01:49:04.607Z] Copying: 733/1024 [MB] (13 MBps) [2024-11-17T01:49:05.552Z] Copying: 752/1024 [MB] (19 MBps) [2024-11-17T01:49:06.495Z] Copying: 770/1024 [MB] (17 MBps) [2024-11-17T01:49:07.440Z] Copying: 804/1024 [MB] (34 MBps) [2024-11-17T01:49:08.384Z] Copying: 816/1024 [MB] (11 MBps) [2024-11-17T01:49:09.329Z] Copying: 826/1024 [MB] (10 MBps) [2024-11-17T01:49:10.272Z] Copying: 839/1024 [MB] (12 MBps) [2024-11-17T01:49:11.659Z] Copying: 867/1024 [MB] (28 MBps) [2024-11-17T01:49:12.232Z] Copying: 895/1024 [MB] (28 MBps) [2024-11-17T01:49:13.619Z] Copying: 907/1024 [MB] (11 MBps) [2024-11-17T01:49:14.564Z] Copying: 927/1024 [MB] (20 MBps) [2024-11-17T01:49:15.508Z] Copying: 954/1024 [MB] (26 MBps) [2024-11-17T01:49:16.454Z] Copying: 964/1024 [MB] (10 MBps) 
[2024-11-17T01:49:17.397Z] Copying: 975/1024 [MB] (10 MBps) [2024-11-17T01:49:18.342Z] Copying: 985/1024 [MB] (10 MBps) [2024-11-17T01:49:19.290Z] Copying: 995/1024 [MB] (10 MBps) [2024-11-17T01:49:20.235Z] Copying: 1008/1024 [MB] (12 MBps) [2024-11-17T01:49:20.498Z] Copying: 1021/1024 [MB] (12 MBps) [2024-11-17T01:49:20.498Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-17 01:49:20.355011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.039 [2024-11-17 01:49:20.355078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:12.039 [2024-11-17 01:49:20.355095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:12.039 [2024-11-17 01:49:20.355105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.039 [2024-11-17 01:49:20.355127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:12.039 [2024-11-17 01:49:20.358200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.039 [2024-11-17 01:49:20.358250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:12.039 [2024-11-17 01:49:20.358263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:29:12.039 [2024-11-17 01:49:20.358271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.039 [2024-11-17 01:49:20.361021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.039 [2024-11-17 01:49:20.361068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:12.039 [2024-11-17 01:49:20.361079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.715 ms 00:29:12.039 [2024-11-17 01:49:20.361088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.039 [2024-11-17 01:49:20.361114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.039 [2024-11-17 01:49:20.361124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:12.039 [2024-11-17 01:49:20.361133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:12.039 [2024-11-17 01:49:20.361141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.039 [2024-11-17 01:49:20.361199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.039 [2024-11-17 01:49:20.361211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:12.039 [2024-11-17 01:49:20.361220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:12.039 [2024-11-17 01:49:20.361227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.039 [2024-11-17 01:49:20.361241] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:12.039 [2024-11-17 01:49:20.361255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361475] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:12.039 [2024-11-17 01:49:20.361541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 
01:49:20.361677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:29:12.040 [2024-11-17 01:49:20.361900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.361994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.362003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.362011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.362019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.362026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.362033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.362041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.362049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.362057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:12.040 [2024-11-17 01:49:20.362073] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:12.040 [2024-11-17 01:49:20.362081] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2726e5b2-d172-4a51-bda9-409ff1009ced 00:29:12.040 [2024-11-17 01:49:20.362089] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:12.040 [2024-11-17 01:49:20.362097] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:12.040 [2024-11-17 01:49:20.362105] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:12.040 [2024-11-17 01:49:20.362112] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:12.040 [2024-11-17 01:49:20.362124] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:12.040 [2024-11-17 01:49:20.362132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:12.040 [2024-11-17 01:49:20.362140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:12.040 [2024-11-17 01:49:20.362147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:12.040 [2024-11-17 01:49:20.362153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:12.040 [2024-11-17 01:49:20.362160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.040 [2024-11-17 01:49:20.362168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:12.040 [2024-11-17 01:49:20.362176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:29:12.040 [2024-11-17 01:49:20.362184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.040 [2024-11-17 01:49:20.375740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.040 [2024-11-17 01:49:20.375800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:12.040 [2024-11-17 01:49:20.375818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.540 ms 00:29:12.040 [2024-11-17 01:49:20.375826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.040 [2024-11-17 01:49:20.376216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.040 [2024-11-17 01:49:20.376238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:12.040 [2024-11-17 01:49:20.376247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:29:12.040 [2024-11-17 01:49:20.376254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.040 [2024-11-17 01:49:20.413348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.040 [2024-11-17 01:49:20.413402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:12.040 [2024-11-17 01:49:20.413414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.040 [2024-11-17 01:49:20.413423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.040 [2024-11-17 01:49:20.413491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.040 [2024-11-17 01:49:20.413501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:12.040 [2024-11-17 01:49:20.413510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.041 [2024-11-17 01:49:20.413519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.041 [2024-11-17 01:49:20.413592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.041 [2024-11-17 01:49:20.413603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:12.041 [2024-11-17 01:49:20.413616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.041 [2024-11-17 01:49:20.413626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.041 [2024-11-17 01:49:20.413642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.041 
[2024-11-17 01:49:20.413651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:12.041 [2024-11-17 01:49:20.413659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.041 [2024-11-17 01:49:20.413672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.302 [2024-11-17 01:49:20.497474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.302 [2024-11-17 01:49:20.497530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:12.302 [2024-11-17 01:49:20.497551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.302 [2024-11-17 01:49:20.497559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.302 [2024-11-17 01:49:20.566915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.302 [2024-11-17 01:49:20.566980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:12.302 [2024-11-17 01:49:20.566993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.302 [2024-11-17 01:49:20.567002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.302 [2024-11-17 01:49:20.567059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.302 [2024-11-17 01:49:20.567070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:12.302 [2024-11-17 01:49:20.567080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.302 [2024-11-17 01:49:20.567095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.302 [2024-11-17 01:49:20.567152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.302 [2024-11-17 01:49:20.567163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:12.302 [2024-11-17 01:49:20.567172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.302 [2024-11-17 01:49:20.567180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.302 [2024-11-17 01:49:20.567291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.302 [2024-11-17 01:49:20.567303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:12.302 [2024-11-17 01:49:20.567311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.302 [2024-11-17 01:49:20.567319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.302 [2024-11-17 01:49:20.567359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.302 [2024-11-17 01:49:20.567369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:12.302 [2024-11-17 01:49:20.567377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.302 [2024-11-17 01:49:20.567385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.302 [2024-11-17 01:49:20.567424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.302 [2024-11-17 01:49:20.567433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:12.302 [2024-11-17 01:49:20.567441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.302 [2024-11-17 01:49:20.567451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.302 [2024-11-17 01:49:20.567498] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.302 [2024-11-17 01:49:20.567508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:12.302 [2024-11-17 01:49:20.567517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.302 [2024-11-17 01:49:20.567525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.302 [2024-11-17 01:49:20.567657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 212.606 ms, result 0 00:29:13.246 00:29:13.246 00:29:13.246 01:49:21 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:13.246 [2024-11-17 01:49:21.444448] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:13.246 [2024-11-17 01:49:21.444588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82246 ] 00:29:13.246 [2024-11-17 01:49:21.609776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.506 [2024-11-17 01:49:21.726960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.768 [2024-11-17 01:49:22.016606] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:13.768 [2024-11-17 01:49:22.016680] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:13.768 [2024-11-17 01:49:22.178567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.768 [2024-11-17 01:49:22.178629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:13.768 [2024-11-17 01:49:22.178649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:13.768 [2024-11-17 01:49:22.178658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.768 [2024-11-17 01:49:22.178716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.768 [2024-11-17 01:49:22.178727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:13.768 [2024-11-17 01:49:22.178739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:13.768 [2024-11-17 01:49:22.178747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.768 [2024-11-17 01:49:22.178768] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:13.768 [2024-11-17 01:49:22.179559] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:13.768 [2024-11-17 01:49:22.179592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.768 [2024-11-17 01:49:22.179600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:13.768 [2024-11-17 01:49:22.179610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:29:13.768 [2024-11-17 01:49:22.179618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.768 [2024-11-17 01:49:22.180203] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:13.768 
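For scale, the spdk_dd restore invocation above asks for --count=262144 blocks out of ftl0. A quick check, assuming the 4 KiB FTL block size implied by the earlier "Copying: .../1024 [MB]" progress (inferred from the log, not read from ftl.json): that count is exactly 1024 MiB, and the wall time between the first run's 'FTL startup' finish stamp and its final 1024/1024 tick reproduces the reported average within rounding.

from datetime import datetime

count, block = 262144, 4096            # --count from the command; 4 KiB inferred
mib = count * block / (1024 * 1024)
print(mib)                             # 1024.0 -> matches "Copying: .../1024 [MB]"

# Timestamps normalized from the bracketed stamps in the log above.
fmt = "%Y-%m-%d %H:%M:%S.%f"
t0 = datetime.strptime("2024-11-17 01:48:19.205726", fmt)  # 'FTL startup' finished
t1 = datetime.strptime("2024-11-17 01:49:20.498", fmt)     # final 1024/1024 tick
print(f"{mib / (t1 - t0).total_seconds():.1f} MBps")       # ~16.7 -> "(average 16 MBps)"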
[2024-11-17 01:49:22.180277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.768 [2024-11-17 01:49:22.180289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:13.768 [2024-11-17 01:49:22.180306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:29:13.768 [2024-11-17 01:49:22.180315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.768 [2024-11-17 01:49:22.180431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.768 [2024-11-17 01:49:22.180443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:13.768 [2024-11-17 01:49:22.180453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:13.768 [2024-11-17 01:49:22.180462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.768 [2024-11-17 01:49:22.180768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.768 [2024-11-17 01:49:22.180814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:13.768 [2024-11-17 01:49:22.180824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:29:13.768 [2024-11-17 01:49:22.180832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.768 [2024-11-17 01:49:22.180907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.768 [2024-11-17 01:49:22.180919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:13.768 [2024-11-17 01:49:22.180929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:13.768 [2024-11-17 01:49:22.180936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.768 [2024-11-17 01:49:22.180964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.768 [2024-11-17 01:49:22.180972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:13.768 [2024-11-17 01:49:22.180981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:13.768 [2024-11-17 01:49:22.180992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.768 [2024-11-17 01:49:22.181014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:13.768 [2024-11-17 01:49:22.185307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.768 [2024-11-17 01:49:22.185367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:13.768 [2024-11-17 01:49:22.185378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.299 ms 00:29:13.768 [2024-11-17 01:49:22.185386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.768 [2024-11-17 01:49:22.185420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.768 [2024-11-17 01:49:22.185429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:13.768 [2024-11-17 01:49:22.185437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:13.768 [2024-11-17 01:49:22.185445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.768 [2024-11-17 01:49:22.185506] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:13.768 [2024-11-17 01:49:22.185532] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 
0x150 bytes
[2024-11-17 01:49:22.185573] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-17 01:49:22.185590] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-17 01:49:22.185695] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-17 01:49:22.185706] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-17 01:49:22.185716] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-17 01:49:22.185727] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-17 01:49:22.185736] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-17 01:49:22.185744] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-17 01:49:22.185754] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-17 01:49:22.185762] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-17 01:49:22.185769] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-17 01:49:22.185785] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.274 ms, status: 0)
[2024-11-17 01:49:22.185913] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.069 ms, status: 0)
[2024-11-17 01:49:22.186036] ftl_layout.c: ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region / offset MiB / blocks MiB):
    sb                 0.00       0.12
    l2p                0.12      80.00
    band_md           80.12       0.50
    band_md_mirror    80.62       0.50
    nvc_md           113.88       0.12
    nvc_md_mirror    114.00       0.12
    p2l0              81.12       8.00
    p2l1              89.12       8.00
    p2l2              97.12       8.00
    p2l3             105.12       8.00
    trim_md          113.12       0.25
    trim_md_mirror   113.38       0.25
    trim_log         113.62       0.12
    trim_log_mirror  113.75       0.12
[2024-11-17 01:49:22.186364] ftl_layout.c: ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region / offset MiB / blocks MiB):
    sb_mirror          0.00       0.12
    vmap          102400.25       3.38
    data_btm           0.25  102400.00
[2024-11-17 01:49:22.186444] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc (region type / ver / blk_offs / blk_sz):
    0x0         5   0x0        0x20
    0x2         0   0x20       0x5000
    0x3         2   0x5020     0x80
    0x4         2   0x50a0     0x80
    0xa         2   0x5120     0x800
    0xb         2   0x5920     0x800
    0xc         2   0x6120     0x800
    0xd         2   0x6920     0x800
    0xe         0   0x7120     0x40
    0xf         0   0x7160     0x40
    0x10        1   0x71a0     0x20
    0x11        1   0x71c0     0x20
    0x6         2   0x71e0     0x20
    0x7         2   0x7200     0x20
    0xfffffffe  0   0x7220     0x13c0e0
[2024-11-17 01:49:22.186578] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev (region type / ver / blk_offs / blk_sz):
    0x1         5   0x0        0x20
    0xfffffffe  0   0x20       0x20
    0x9         0   0x40       0x1900000
    0x5         0   0x1900040  0x360
    0xfffffffe  0   0x19003a0  0x3fc60
[2024-11-17 01:49:22.186637] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.661 ms, status: 0)
[2024-11-17 01:49:22.214804] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 28.053 ms, status: 0)
[2024-11-17 01:49:22.214923] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.065 ms, status: 0)
[2024-11-17 01:49:22.259803] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 44.735 ms, status: 0)
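The layout parameters above are internally consistent: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region in the NV cache layout table. A quick sanity check in plain Python (variable names are mine, not SPDK's):

# Cross-check of the l2p region size from the values logged above.
l2p_entries = 20971520      # "L2P entries" from ftl_layout_setup
l2p_addr_size = 4           # "L2P address size", bytes per entry
print(l2p_entries * l2p_addr_size / (1024 * 1024))   # 80.0 -> matches "l2p ... 80.00 MiB"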
name: Initialize valid map 00:29:14.030 [2024-11-17 01:49:22.259899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:14.030 [2024-11-17 01:49:22.259907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.030 [2024-11-17 01:49:22.260023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.030 [2024-11-17 01:49:22.260036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:14.030 [2024-11-17 01:49:22.260045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:14.030 [2024-11-17 01:49:22.260053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.030 [2024-11-17 01:49:22.260181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.030 [2024-11-17 01:49:22.260194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:14.030 [2024-11-17 01:49:22.260203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:29:14.030 [2024-11-17 01:49:22.260212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.030 [2024-11-17 01:49:22.275844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.030 [2024-11-17 01:49:22.275895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:14.030 [2024-11-17 01:49:22.275907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.612 ms 00:29:14.030 [2024-11-17 01:49:22.275915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.030 [2024-11-17 01:49:22.276060] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:14.031 [2024-11-17 01:49:22.276078] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:14.031 [2024-11-17 01:49:22.276088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.276099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:14.031 [2024-11-17 01:49:22.276108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:14.031 [2024-11-17 01:49:22.276116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.031 [2024-11-17 01:49:22.288409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.288456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:14.031 [2024-11-17 01:49:22.288467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.276 ms 00:29:14.031 [2024-11-17 01:49:22.288475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.031 [2024-11-17 01:49:22.288605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.288615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:14.031 [2024-11-17 01:49:22.288625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:29:14.031 [2024-11-17 01:49:22.288639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.031 [2024-11-17 01:49:22.288688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.288698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:14.031 [2024-11-17 01:49:22.288707] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:14.031 [2024-11-17 01:49:22.288716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.031 [2024-11-17 01:49:22.289320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.289342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:14.031 [2024-11-17 01:49:22.289352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:29:14.031 [2024-11-17 01:49:22.289360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.031 [2024-11-17 01:49:22.289378] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:14.031 [2024-11-17 01:49:22.289392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.289400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:14.031 [2024-11-17 01:49:22.289409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:14.031 [2024-11-17 01:49:22.289417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.031 [2024-11-17 01:49:22.301911] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:14.031 [2024-11-17 01:49:22.302080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.302092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:14.031 [2024-11-17 01:49:22.302102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.644 ms 00:29:14.031 [2024-11-17 01:49:22.302111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.031 [2024-11-17 01:49:22.304339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.304379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:14.031 [2024-11-17 01:49:22.304390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.204 ms 00:29:14.031 [2024-11-17 01:49:22.304398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.031 [2024-11-17 01:49:22.304491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.304502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:14.031 [2024-11-17 01:49:22.304511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:14.031 [2024-11-17 01:49:22.304520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.031 [2024-11-17 01:49:22.304545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.304555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:14.031 [2024-11-17 01:49:22.304569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:14.031 [2024-11-17 01:49:22.304576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.031 [2024-11-17 01:49:22.304606] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:14.031 [2024-11-17 01:49:22.304616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.031 [2024-11-17 01:49:22.304625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:14.031 
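Every management step above is traced with the same action/duration/status shape, which makes the startup timing easy to summarize offline. A minimal sketch, assuming a capture in the consolidated one-line form used in this transcript (the regex and all names are mine; adapt the pattern if you parse the raw four-line trace_step output instead):

import re

# Rank FTL management steps by duration from a log capture.
STEP = re.compile(r"\[FTL\]\[\w+\] Action: (.+?) \(duration: ([\d.]+) ms, status: (\d+)\)")

def rank_steps(log_text: str) -> None:
    steps = [(name, float(ms)) for name, ms, _status in STEP.findall(log_text)]
    for name, ms in sorted(steps, key=lambda s: s[1], reverse=True):
        print(f"{ms:10.3f} ms  {name}")

# Usage: rank_steps(open("ftl_restore.log").read())
# On the startup above, the top entries would be Initialize NV cache
# (44.735 ms) and Initialize metadata (28.053 ms).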
[2024-11-17 01:49:22.304625] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.011 ms, status: 0)
[2024-11-17 01:49:22.331480] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 26.757 ms, status: 0)
[2024-11-17 01:49:22.331602] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.041 ms, status: 0)
[2024-11-17 01:49:22.332776] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.742 ms, result 0
[2024-11-17T01:50:39.915Z] Copying: 1024/1024 [MB] (average 13 MBps)
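The copy phase above moved 1024 MB between roughly 01:49:22 (end of FTL startup) and 01:50:39 (final progress tick), so the reported 13 MBps average checks out. A quick back-of-the-envelope computation:

# Sanity-check the "(average 13 MBps)" figure against the wall-clock
# timestamps from the log; values approximate.
size_mb = 1024
start_s = 1 * 3600 + 49 * 60 + 22   # ~01:49:22
end_s = 1 * 3600 + 50 * 60 + 39     # ~01:50:39
print(size_mb / (end_s - start_s))  # ~13.3 MBps over 77 s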
[2024-11-17 01:50:39.757001] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.005 ms, status: 0)
[2024-11-17 01:50:39.757070] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-17 01:50:39.762590] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 5.433 ms, status: 0)
[2024-11-17 01:50:39.762993] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 0.319 ms, status: 0)
[2024-11-17 01:50:39.763081] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Fast persist NV cache metadata (duration: 0.006 ms, status: 0)
[2024-11-17 01:50:39.763194] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL SHM clean state (duration: 0.026 ms, status: 0)
[2024-11-17 01:50:39.763256] ftl_debug.c: ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-17 01:50:39.763275] ftl_debug.c: ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands identical)
[2024-11-17 01:50:39.764505] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2726e5b2-d172-4a51-bda9-409ff1009ced
[2024-11-17 01:50:39.764521] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-17 01:50:39.764531] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
[2024-11-17 01:50:39.764542] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-17 01:50:39.764554] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-17 01:50:39.764565] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-11-17 01:50:39.764629] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 1.362 ms, status: 0)
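"WAF: inf" in the stats dump follows from the usual definition of write amplification (media writes divided by host writes) together with "user writes: 0" above: only the 32 metadata writes have landed, so the ratio is undefined and printed as inf. A small hedged illustration (function and names are mine, not SPDK's):

# Write amplification factor as commonly defined; "WAF: inf" above is
# the user_writes == 0 corner case. Illustrative only.
def waf(total_writes: int, user_writes: int) -> float:
    return float("inf") if user_writes == 0 else total_writes / user_writes

print(waf(32, 0))   # inf, matching the stats dump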
[2024-11-17 01:50:39.779088] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 14.356 ms, status: 0)
[2024-11-17 01:50:39.779559] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.401 ms, status: 0)
[2024-11-17 01:50:39.816001] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.816101] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.816189] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.816232] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.899668] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.968947] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.969066] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.969137] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.969251] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.969309] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.969381] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.969453] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
[2024-11-17 01:50:39.969603] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 212.676 ms, result 0
01:50:40 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
01:50:42 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
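restore.sh first verifies the file read back from ftl0 against its stored checksum, then writes it to the FTL bdev again at an offset. A minimal sketch of that flow, with paths and flags copied from the two command lines above; I read --seek as a dd-style skip over output blocks (so 131072 blocks would be 512 MiB if the bdev uses 4 KiB blocks), but that block-size reading is my assumption, not something the log states:

import subprocess

SPDK = "/home/vagrant/spdk_repo/spdk"   # repo root, per the paths in the log

# 1) Verify the previously read-back file against its stored md5,
#    as restore.sh does with `md5sum -c`.
subprocess.run(["md5sum", "-c", f"{SPDK}/test/ftl/testfile.md5"], check=True)

# 2) Re-write the test file to the FTL bdev at an offset; arguments
#    copied verbatim from the spdk_dd invocation above.
subprocess.run([
    f"{SPDK}/build/bin/spdk_dd",
    f"--if={SPDK}/test/ftl/testfile",
    "--ob=ftl0",
    f"--json={SPDK}/test/ftl/config/ftl.json",
    "--seek=131072",
], check=True)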
[2024-11-17 01:50:42.903673] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
[2024-11-17 01:50:42.903763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83060 ]
[2024-11-17 01:50:43.055286] app.c: spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-17 01:50:43.172142] reactor.c: reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-17 01:50:43.460518] bdev.c: bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-17 01:50:43.460606] bdev.c: bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-17 01:50:43.619230] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.005 ms, status: 0)
[2024-11-17 01:50:43.619330] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.038 ms, status: 0)
[2024-11-17 01:50:43.619371] mngt/ftl_mngt_bdev.c: ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-17 01:50:43.620451] mngt/ftl_mngt_bdev.c: ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-17 01:50:43.620525] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 1.147 ms, status: 0)
[2024-11-17 01:50:43.620947] mngt/ftl_mngt_md.c: ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
[2024-11-17 01:50:43.620987] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 0.032 ms, status: 0)
[2024-11-17 01:50:43.621073] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.036 ms, status: 0)
[2024-11-17 01:50:43.621388] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 0.238 ms, status: 0)
[2024-11-17 01:50:43.621485] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.056 ms, status: 0)
[2024-11-17 01:50:43.621536] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.007 ms, status: 0)
[2024-11-17 01:50:43.621578] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-17 01:50:43.625857] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 4.239 ms, status: 0)
[2024-11-17 01:50:43.625919] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.013 ms, status: 0)
[2024-11-17 01:50:43.625997] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-17 01:50:43.626020] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-17 01:50:43.626058] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-17 01:50:43.626074] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-17 01:50:43.626181] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-17 01:50:43.626191] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-17 01:50:43.626202] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-17 01:50:43.626213] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-17 01:50:43.626223] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-17 01:50:43.626232] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-17 01:50:43.626243] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-17 01:50:43.626251] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-17 01:50:43.626259] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-17 01:50:43.626274] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.273 ms, status: 0)
[2024-11-17 01:50:43.626382] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.069 ms, status: 0)
[2024-11-17 01:50:43.626504] ftl_layout.c: ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[2024-11-17 01:50:43.626840] ftl_layout.c: ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
[2024-11-17 01:50:43.626917] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    (region tables identical to the 01:49:22 dumps above)
[2024-11-17 01:50:43.627046] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
[2024-11-17 01:50:43.627054] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-17 01:50:43.627062] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:35.373 [2024-11-17 01:50:43.627069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:35.373 [2024-11-17 01:50:43.627076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:35.373 [2024-11-17 01:50:43.627084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:35.373 [2024-11-17 01:50:43.627092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.627101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:35.373 [2024-11-17 01:50:43.627108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:30:35.373 [2024-11-17 01:50:43.627116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.654997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.655049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:35.373 [2024-11-17 01:50:43.655061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.840 ms 00:30:35.373 [2024-11-17 01:50:43.655070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.655155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.655165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:35.373 [2024-11-17 01:50:43.655174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:35.373 [2024-11-17 01:50:43.655186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.700214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.700273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:35.373 [2024-11-17 01:50:43.700286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.944 ms 00:30:35.373 [2024-11-17 01:50:43.700296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.700353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.700363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:35.373 [2024-11-17 01:50:43.700373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:35.373 [2024-11-17 01:50:43.700381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.700495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.700507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:35.373 [2024-11-17 01:50:43.700516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:35.373 [2024-11-17 01:50:43.700525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.700653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.700675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:35.373 [2024-11-17 01:50:43.700684] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:30:35.373 [2024-11-17 01:50:43.700692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.716460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.716513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:35.373 [2024-11-17 01:50:43.716525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.748 ms 00:30:35.373 [2024-11-17 01:50:43.716533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.716683] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:35.373 [2024-11-17 01:50:43.716698] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:35.373 [2024-11-17 01:50:43.716708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.716720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:35.373 [2024-11-17 01:50:43.716729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:35.373 [2024-11-17 01:50:43.716736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.729032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.729088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:35.373 [2024-11-17 01:50:43.729099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.279 ms 00:30:35.373 [2024-11-17 01:50:43.729107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.729238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.729248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:35.373 [2024-11-17 01:50:43.729257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:30:35.373 [2024-11-17 01:50:43.729271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.729323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.729335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:35.373 [2024-11-17 01:50:43.729343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:35.373 [2024-11-17 01:50:43.729351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.729962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.729985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:35.373 [2024-11-17 01:50:43.729995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:30:35.373 [2024-11-17 01:50:43.730004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.730022] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:35.373 [2024-11-17 01:50:43.730038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.730047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:35.373 [2024-11-17 01:50:43.730055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:35.373 [2024-11-17 01:50:43.730063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.742829] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:35.373 [2024-11-17 01:50:43.743013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.743024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:35.373 [2024-11-17 01:50:43.743035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.930 ms 00:30:35.373 [2024-11-17 01:50:43.743043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.745213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.745257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:35.373 [2024-11-17 01:50:43.745267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.145 ms 00:30:35.373 [2024-11-17 01:50:43.745275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.745368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.745378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:35.373 [2024-11-17 01:50:43.745387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:35.373 [2024-11-17 01:50:43.745395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.745418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.745428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:35.373 [2024-11-17 01:50:43.745440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:35.373 [2024-11-17 01:50:43.745448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.745478] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:35.373 [2024-11-17 01:50:43.745488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.373 [2024-11-17 01:50:43.745496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:35.373 [2024-11-17 01:50:43.745504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:35.373 [2024-11-17 01:50:43.745511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.373 [2024-11-17 01:50:43.771747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.374 [2024-11-17 01:50:43.771818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:35.374 [2024-11-17 01:50:43.771831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.215 ms 00:30:35.374 [2024-11-17 01:50:43.771840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.374 [2024-11-17 01:50:43.771934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.374 [2024-11-17 01:50:43.771944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:35.374 [2024-11-17 01:50:43.771953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.041 ms 00:30:35.374 [2024-11-17 01:50:43.771960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.374 [2024-11-17 01:50:43.773189] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.550 ms, result 0 00:30:36.762  [2024-11-17T01:51:40.647Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-17 01:51:40.485530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.188 [2024-11-17 01:51:40.485582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:32.188 [2024-11-17 01:51:40.485595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:32.188 [2024-11-17 01:51:40.485601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.188 [2024-11-17 01:51:40.487343] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:32.188 [2024-11-17 01:51:40.491297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.188 [2024-11-17 01:51:40.491324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:32.188 [2024-11-17 01:51:40.491333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.922 ms 00:31:32.188 [2024-11-17 01:51:40.491341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.188 [2024-11-17 01:51:40.499360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.188 [2024-11-17 01:51:40.499393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:32.188 [2024-11-17 01:51:40.499402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.794 ms 00:31:32.188 [2024-11-17 01:51:40.499408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.188 [2024-11-17 01:51:40.499428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.188 [2024-11-17 01:51:40.499436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:32.188 [2024-11-17 01:51:40.499442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:32.188 [2024-11-17 01:51:40.499448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.188 [2024-11-17 01:51:40.499486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.188 [2024-11-17 01:51:40.499493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:32.188 [2024-11-17 01:51:40.499501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:32.188 [2024-11-17 01:51:40.499507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.188 [2024-11-17 01:51:40.499517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:32.188 [2024-11-17 01:51:40.499526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128000 / 261120 wr_cnt: 1 state: open 00:31:32.188 [2024-11-17 01:51:40.499534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120
wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:32.188 [2024-11-17 01:51:40.499834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499875] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.499994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500022] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:32.189 [2024-11-17 01:51:40.500146] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:32.189 [2024-11-17 01:51:40.500152] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2726e5b2-d172-4a51-bda9-409ff1009ced 00:31:32.189 [2024-11-17 01:51:40.500158] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128000 00:31:32.189 [2024-11-17 01:51:40.500164] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128032 00:31:32.189 [2024-11-17 01:51:40.500169] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128000 00:31:32.189 [2024-11-17 01:51:40.500175] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:31:32.189 
[2024-11-17 01:51:40.500181] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:32.189 [2024-11-17 01:51:40.500188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:32.189 [2024-11-17 01:51:40.500195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:32.189 [2024-11-17 01:51:40.500199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:32.189 [2024-11-17 01:51:40.500204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:32.189 [2024-11-17 01:51:40.500210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.189 [2024-11-17 01:51:40.500216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:32.189 [2024-11-17 01:51:40.500222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:31:32.189 [2024-11-17 01:51:40.500227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.189 [2024-11-17 01:51:40.510008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.189 [2024-11-17 01:51:40.510034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:32.189 [2024-11-17 01:51:40.510042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.770 ms 00:31:32.189 [2024-11-17 01:51:40.510052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.189 [2024-11-17 01:51:40.510322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.189 [2024-11-17 01:51:40.510334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:32.189 [2024-11-17 01:51:40.510340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:31:32.189 [2024-11-17 01:51:40.510345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.189 [2024-11-17 01:51:40.536244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.189 [2024-11-17 01:51:40.536273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:32.189 [2024-11-17 01:51:40.536283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.189 [2024-11-17 01:51:40.536289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.189 [2024-11-17 01:51:40.536328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.189 [2024-11-17 01:51:40.536335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:32.189 [2024-11-17 01:51:40.536341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.189 [2024-11-17 01:51:40.536347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.189 [2024-11-17 01:51:40.536399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.189 [2024-11-17 01:51:40.536407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:32.189 [2024-11-17 01:51:40.536413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.189 [2024-11-17 01:51:40.536420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.189 [2024-11-17 01:51:40.536431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.189 [2024-11-17 01:51:40.536437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:32.189 [2024-11-17 01:51:40.536443] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.189 [2024-11-17 01:51:40.536448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.189 [2024-11-17 01:51:40.596562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.189 [2024-11-17 01:51:40.596600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:32.189 [2024-11-17 01:51:40.596614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.189 [2024-11-17 01:51:40.596620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.451 [2024-11-17 01:51:40.645618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.451 [2024-11-17 01:51:40.645654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:32.451 [2024-11-17 01:51:40.645667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.451 [2024-11-17 01:51:40.645674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.451 [2024-11-17 01:51:40.645716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.451 [2024-11-17 01:51:40.645723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:32.451 [2024-11-17 01:51:40.645730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.451 [2024-11-17 01:51:40.645736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.451 [2024-11-17 01:51:40.645776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.451 [2024-11-17 01:51:40.645783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:32.451 [2024-11-17 01:51:40.645798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.451 [2024-11-17 01:51:40.645805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.451 [2024-11-17 01:51:40.645861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.451 [2024-11-17 01:51:40.645868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:32.451 [2024-11-17 01:51:40.645874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.451 [2024-11-17 01:51:40.645880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.451 [2024-11-17 01:51:40.645901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.451 [2024-11-17 01:51:40.645907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:32.451 [2024-11-17 01:51:40.645913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.451 [2024-11-17 01:51:40.645919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.451 [2024-11-17 01:51:40.645945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.451 [2024-11-17 01:51:40.645951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:32.451 [2024-11-17 01:51:40.645957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.451 [2024-11-17 01:51:40.645963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.451 [2024-11-17 01:51:40.645995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.451 [2024-11-17 01:51:40.646002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:32.451 
[2024-11-17 01:51:40.646008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.451 [2024-11-17 01:51:40.646014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.451 [2024-11-17 01:51:40.646100] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 161.536 ms, result 0 00:31:33.838 00:31:33.838 00:31:33.839 01:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:33.839 [2024-11-17 01:51:42.111981] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:31:33.839 [2024-11-17 01:51:42.112103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83673 ] 00:31:33.839 [2024-11-17 01:51:42.268557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.099 [2024-11-17 01:51:42.346236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.099 [2024-11-17 01:51:42.552743] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:34.099 [2024-11-17 01:51:42.552786] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:34.362 [2024-11-17 01:51:42.703703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.362 [2024-11-17 01:51:42.703740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:34.362 [2024-11-17 01:51:42.703754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:34.362 [2024-11-17 01:51:42.703760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.362 [2024-11-17 01:51:42.703805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.362 [2024-11-17 01:51:42.703814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:34.362 [2024-11-17 01:51:42.703822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:34.362 [2024-11-17 01:51:42.703827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.362 [2024-11-17 01:51:42.703840] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:34.362 [2024-11-17 01:51:42.704565] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:34.362 [2024-11-17 01:51:42.704596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.362 [2024-11-17 01:51:42.704604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:34.362 [2024-11-17 01:51:42.704612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:31:34.362 [2024-11-17 01:51:42.704618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.362 [2024-11-17 01:51:42.704867] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:34.362 [2024-11-17 01:51:42.704891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.362 [2024-11-17 01:51:42.704898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Load super block 00:31:34.362 [2024-11-17 01:51:42.704907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:34.362 [2024-11-17 01:51:42.704913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.362 [2024-11-17 01:51:42.704943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.362 [2024-11-17 01:51:42.704950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:34.362 [2024-11-17 01:51:42.704956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:34.362 [2024-11-17 01:51:42.704961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.362 [2024-11-17 01:51:42.705151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.362 [2024-11-17 01:51:42.705167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:34.362 [2024-11-17 01:51:42.705174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:31:34.362 [2024-11-17 01:51:42.705179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.362 [2024-11-17 01:51:42.705227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.362 [2024-11-17 01:51:42.705235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:34.362 [2024-11-17 01:51:42.705241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:34.362 [2024-11-17 01:51:42.705246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.362 [2024-11-17 01:51:42.705262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.362 [2024-11-17 01:51:42.705269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:34.362 [2024-11-17 01:51:42.705275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:34.362 [2024-11-17 01:51:42.705283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.362 [2024-11-17 01:51:42.705296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:34.362 [2024-11-17 01:51:42.708137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.362 [2024-11-17 01:51:42.708163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:34.362 [2024-11-17 01:51:42.708171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:31:34.362 [2024-11-17 01:51:42.708177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.363 [2024-11-17 01:51:42.708203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.363 [2024-11-17 01:51:42.708210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:34.363 [2024-11-17 01:51:42.708216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:34.363 [2024-11-17 01:51:42.708221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.363 [2024-11-17 01:51:42.708252] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:34.363 [2024-11-17 01:51:42.708268] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:34.363 [2024-11-17 01:51:42.708295] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:34.363 
[2024-11-17 01:51:42.708307] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:34.363 [2024-11-17 01:51:42.708386] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:34.363 [2024-11-17 01:51:42.708400] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:34.363 [2024-11-17 01:51:42.708409] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:34.363 [2024-11-17 01:51:42.708416] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708423] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708429] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:34.363 [2024-11-17 01:51:42.708437] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:34.363 [2024-11-17 01:51:42.708442] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:34.363 [2024-11-17 01:51:42.708448] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:34.363 [2024-11-17 01:51:42.708453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.363 [2024-11-17 01:51:42.708459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:34.363 [2024-11-17 01:51:42.708465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:31:34.363 [2024-11-17 01:51:42.708470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.363 [2024-11-17 01:51:42.708535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.363 [2024-11-17 01:51:42.708541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:34.363 [2024-11-17 01:51:42.708546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:34.363 [2024-11-17 01:51:42.708553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.363 [2024-11-17 01:51:42.708628] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:34.363 [2024-11-17 01:51:42.708641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:34.363 [2024-11-17 01:51:42.708647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:34.363 [2024-11-17 01:51:42.708664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:34.363 [2024-11-17 01:51:42.708680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:34.363 [2024-11-17 01:51:42.708690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:34.363 [2024-11-17 01:51:42.708695] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:34.363 [2024-11-17 01:51:42.708700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:34.363 [2024-11-17 01:51:42.708705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:34.363 [2024-11-17 01:51:42.708710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:34.363 [2024-11-17 01:51:42.708715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:34.363 [2024-11-17 01:51:42.708729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:34.363 [2024-11-17 01:51:42.708744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:34.363 [2024-11-17 01:51:42.708759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:34.363 [2024-11-17 01:51:42.708772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:34.363 [2024-11-17 01:51:42.708786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:34.363 [2024-11-17 01:51:42.708811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:34.363 [2024-11-17 01:51:42.708821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:34.363 [2024-11-17 01:51:42.708826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:34.363 [2024-11-17 01:51:42.708831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:34.363 [2024-11-17 01:51:42.708836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:34.363 [2024-11-17 01:51:42.708841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:34.363 [2024-11-17 01:51:42.708846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:34.363 [2024-11-17 01:51:42.708856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:34.363 [2024-11-17 01:51:42.708861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:34.363 
[2024-11-17 01:51:42.708866] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:34.363 [2024-11-17 01:51:42.708872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:34.363 [2024-11-17 01:51:42.708878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:34.363 [2024-11-17 01:51:42.708890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:34.363 [2024-11-17 01:51:42.708895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:34.363 [2024-11-17 01:51:42.708900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:34.363 [2024-11-17 01:51:42.708905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:34.363 [2024-11-17 01:51:42.708910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:34.363 [2024-11-17 01:51:42.708915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:34.363 [2024-11-17 01:51:42.708921] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:34.363 [2024-11-17 01:51:42.708930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:34.363 [2024-11-17 01:51:42.708937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:34.363 [2024-11-17 01:51:42.708942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:34.363 [2024-11-17 01:51:42.708949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:34.363 [2024-11-17 01:51:42.708954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:34.363 [2024-11-17 01:51:42.708959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:34.363 [2024-11-17 01:51:42.708965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:34.363 [2024-11-17 01:51:42.708970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:34.363 [2024-11-17 01:51:42.708976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:34.363 [2024-11-17 01:51:42.708981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:34.364 [2024-11-17 01:51:42.708986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:34.364 [2024-11-17 01:51:42.708992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:34.364 [2024-11-17 01:51:42.708997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:34.364 [2024-11-17 
00:31:34.364 [2024-11-17 01:51:42.709002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:31:34.364 [2024-11-17 01:51:42.709007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:31:34.364 [2024-11-17 01:51:42.709012] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:31:34.364 [2024-11-17 01:51:42.709018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:34.364 [2024-11-17 01:51:42.709024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:34.364 [2024-11-17 01:51:42.709030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:31:34.364 [2024-11-17 01:51:42.709035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:31:34.364 [2024-11-17 01:51:42.709040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:31:34.364 [2024-11-17 01:51:42.709046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.709052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:31:34.364 [2024-11-17 01:51:42.709058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms
00:31:34.364 [2024-11-17 01:51:42.709063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.727547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.727574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:34.364 [2024-11-17 01:51:42.727582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.455 ms
00:31:34.364 [2024-11-17 01:51:42.727587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.727646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.727652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:31:34.364 [2024-11-17 01:51:42.727658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:31:34.364 [2024-11-17 01:51:42.727666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.771473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.771504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:34.364 [2024-11-17 01:51:42.771513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.769 ms
00:31:34.364 [2024-11-17 01:51:42.771520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.771552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.771560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:34.364 [2024-11-17 01:51:42.771567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:31:34.364 [2024-11-17 01:51:42.771573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.771643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.771651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:34.364 [2024-11-17 01:51:42.771657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:31:34.364 [2024-11-17 01:51:42.771663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.771751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.771760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:34.364 [2024-11-17 01:51:42.771766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms
00:31:34.364 [2024-11-17 01:51:42.771771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.782220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.782249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:34.364 [2024-11-17 01:51:42.782257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.435 ms
00:31:34.364 [2024-11-17 01:51:42.782262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.782341] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:31:34.364 [2024-11-17 01:51:42.782350] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:31:34.364 [2024-11-17 01:51:42.782357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.782363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:31:34.364 [2024-11-17 01:51:42.782371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:31:34.364 [2024-11-17 01:51:42.782377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.791614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.791637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:31:34.364 [2024-11-17 01:51:42.791644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.226 ms
00:31:34.364 [2024-11-17 01:51:42.791650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.791736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.791743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:31:34.364 [2024-11-17 01:51:42.791748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:31:34.364 [2024-11-17 01:51:42.791754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.791780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.791787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:31:34.364 [2024-11-17 01:51:42.791806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms
00:31:34.364 [2024-11-17 01:51:42.791811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.792246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.792261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:31:34.364 [2024-11-17 01:51:42.792267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms
00:31:34.364 [2024-11-17 01:51:42.792273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.792283] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore
00:31:34.364 [2024-11-17 01:51:42.792292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.792298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:31:34.364 [2024-11-17 01:51:42.792303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:31:34.364 [2024-11-17 01:51:42.792309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.800837] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:31:34.364 [2024-11-17 01:51:42.800943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.800950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:31:34.364 [2024-11-17 01:51:42.800957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.612 ms
00:31:34.364 [2024-11-17 01:51:42.800962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.802605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.802625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:31:34.364 [2024-11-17 01:51:42.802633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.629 ms
00:31:34.364 [2024-11-17 01:51:42.802639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.802688] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000
00:31:34.364 [2024-11-17 01:51:42.803040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.803057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:31:34.364 [2024-11-17 01:51:42.803064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms
00:31:34.364 [2024-11-17 01:51:42.803069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.803096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.364 [2024-11-17 01:51:42.803106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:31:34.364 [2024-11-17 01:51:42.803112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:31:34.364 [2024-11-17 01:51:42.803117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.364 [2024-11-17 01:51:42.803140] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:31:34.364 [2024-11-17 01:51:42.803147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.365 [2024-11-17 01:51:42.803152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:31:34.365 [2024-11-17 01:51:42.803157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:31:34.365 [2024-11-17 01:51:42.803163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.626 [2024-11-17 01:51:42.821324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.626 [2024-11-17 01:51:42.821353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:31:34.626 [2024-11-17 01:51:42.821362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.149 ms
00:31:34.626 [2024-11-17 01:51:42.821368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.626 [2024-11-17 01:51:42.821424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:34.626 [2024-11-17 01:51:42.821432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:31:34.626 [2024-11-17 01:51:42.821438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:31:34.626 [2024-11-17 01:51:42.821443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:34.626 [2024-11-17 01:51:42.822252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.238 ms, result 0
00:31:35.571 [2024-11-17T01:52:46.706Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-17 01:52:46.486017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:38.247 [2024-11-17 01:52:46.486092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:32:38.247 [2024-11-17 01:52:46.486108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:32:38.247 [2024-11-17 01:52:46.486118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.247 [2024-11-17 01:52:46.486140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:38.247 [2024-11-17 01:52:46.489605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:38.247 [2024-11-17 01:52:46.489657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:32:38.247 [2024-11-17 01:52:46.489669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.448 ms
00:32:38.247 [2024-11-17 01:52:46.489678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.247 [2024-11-17 01:52:46.489920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:38.247 [2024-11-17 01:52:46.489939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:32:38.247 [2024-11-17 01:52:46.489950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms
00:32:38.247 [2024-11-17 01:52:46.489958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.247 [2024-11-17 01:52:46.489986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:38.247 [2024-11-17 01:52:46.489995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:32:38.247 [2024-11-17 01:52:46.490004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:32:38.247 [2024-11-17 01:52:46.490012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.247 [2024-11-17 01:52:46.490069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:38.247 [2024-11-17 01:52:46.490079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:32:38.247 [2024-11-17 01:52:46.490091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:32:38.247 [2024-11-17 01:52:46.490098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.247 [2024-11-17 01:52:46.490112] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:32:38.247 [2024-11-17 01:52:46.490125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:32:38.247 [2024-11-17 01:52:46.490136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:32:38.247 [2024-11-17 01:52:46.490673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.490987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.491087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.491095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.491103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:32:38.248 [2024-11-17 01:52:46.491119] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:32:38.248 [2024-11-17 01:52:46.491127] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2726e5b2-d172-4a51-bda9-409ff1009ced
00:32:38.248 [2024-11-17 01:52:46.491136] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:32:38.248 [2024-11-17 01:52:46.491144] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3104
00:32:38.248 [2024-11-17 01:52:46.491151] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3072
00:32:38.248 [2024-11-17 01:52:46.491172] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104
00:32:38.248 [2024-11-17 01:52:46.491180] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:32:38.248 [2024-11-17 01:52:46.491191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:32:38.248 [2024-11-17 01:52:46.491199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:32:38.248 [2024-11-17 01:52:46.491206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:32:38.248 [2024-11-17 01:52:46.491212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:32:38.248 [2024-11-17 01:52:46.491220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:38.248 [2024-11-17 01:52:46.491228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:32:38.248 [2024-11-17 01:52:46.491237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms
00:32:38.248 [2024-11-17 01:52:46.491244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.505721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:38.248 [2024-11-17 01:52:46.505775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:32:38.248 [2024-11-17 01:52:46.505798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.458 ms
00:32:38.248 [2024-11-17 01:52:46.505813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.506202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:38.248 [2024-11-17 01:52:46.506225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:32:38.248 [2024-11-17 01:52:46.506234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms
00:32:38.248 [2024-11-17 01:52:46.506242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.542681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.248 [2024-11-17 01:52:46.542739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:32:38.248 [2024-11-17 01:52:46.542751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.248 [2024-11-17 01:52:46.542760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.542843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.248 [2024-11-17 01:52:46.542854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:32:38.248 [2024-11-17 01:52:46.542864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.248 [2024-11-17 01:52:46.542873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.542927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.248 [2024-11-17 01:52:46.542939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:32:38.248 [2024-11-17 01:52:46.542952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.248 [2024-11-17 01:52:46.542960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.542978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.248 [2024-11-17 01:52:46.542986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:32:38.248 [2024-11-17 01:52:46.542996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.248 [2024-11-17 01:52:46.543004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.625967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.248 [2024-11-17 01:52:46.626035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:32:38.248 [2024-11-17 01:52:46.626048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.248 [2024-11-17 01:52:46.626056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.694367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.248 [2024-11-17 01:52:46.694437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:32:38.248 [2024-11-17 01:52:46.694449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.248 [2024-11-17 01:52:46.694457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.694537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.248 [2024-11-17 01:52:46.694548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:32:38.248 [2024-11-17 01:52:46.694557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.248 [2024-11-17 01:52:46.694572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.694609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.248 [2024-11-17 01:52:46.694619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:32:38.248 [2024-11-17 01:52:46.694627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.248 [2024-11-17 01:52:46.694634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.694712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.248 [2024-11-17 01:52:46.694722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:32:38.248 [2024-11-17 01:52:46.694731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.248 [2024-11-17 01:52:46.694740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.248 [2024-11-17 01:52:46.694769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.249 [2024-11-17 01:52:46.694779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:32:38.249 [2024-11-17 01:52:46.694812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.249 [2024-11-17 01:52:46.694821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.249 [2024-11-17 01:52:46.694860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.249 [2024-11-17 01:52:46.694870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:32:38.249 [2024-11-17 01:52:46.694879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.249 [2024-11-17 01:52:46.694887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.249 [2024-11-17 01:52:46.694937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:38.249 [2024-11-17 01:52:46.694948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:32:38.249 [2024-11-17 01:52:46.694957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:38.249 [2024-11-17 01:52:46.694965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:38.249 [2024-11-17 01:52:46.695096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 209.042 ms, result 0
00:32:38.821
00:32:38.821
00:32:38.821 01:52:47 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:32:41.369 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:32:41.369 Process with pid 81392 is not found
00:32:41.369 Remove shared memory files
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 81392
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81392 ']'
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81392
00:32:41.369 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81392) - No such process
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 81392 is not found'
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_band_md /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_l2p_l1 /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_l2p_l2 /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_l2p_l2_ctx /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_nvc_md /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_p2l_pool /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_sb /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_sb_shm /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_trim_bitmap /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_trim_log /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_trim_md /dev/hugepages/ftl_2726e5b2-d172-4a51-bda9-409ff1009ced_vmap
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:32:41.369
00:32:41.369 real 4m51.755s
00:32:41.369 user 4m39.339s
00:32:41.369 sys 0m12.286s
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:41.369 01:52:49 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:32:41.369 ************************************
00:32:41.369 END TEST ftl_restore_fast
00:32:41.369 ************************************
00:32:41.369 01:52:49 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:32:41.369 01:52:49 ftl -- ftl/ftl.sh@14 -- # killprocess 72216
00:32:41.369 01:52:49 ftl -- common/autotest_common.sh@954 -- # '[' -z 72216 ']'
00:32:41.369 Process with pid 72216 is not found
00:32:41.369 01:52:49 ftl -- common/autotest_common.sh@958 -- # kill -0 72216
00:32:41.369 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72216) - No such process
00:32:41.369 01:52:49 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 72216 is not found'
00:32:41.369 01:52:49 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:32:41.369 01:52:49 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84366
00:32:41.369 01:52:49 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84366
00:32:41.369 01:52:49 ftl -- common/autotest_common.sh@835 -- # '[' -z 84366 ']'
00:32:41.369 01:52:49 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:41.369 01:52:49 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:41.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:41.369 01:52:49 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:41.369 01:52:49 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:41.369 01:52:49 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:41.369 01:52:49 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:41.369 [2024-11-17 01:52:49.765352] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:32:41.369 [2024-11-17 01:52:49.765477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84366 ]
00:32:41.630 [2024-11-17 01:52:49.921231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:41.630 [2024-11-17 01:52:49.995058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:42.202 01:52:50 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:42.202 01:52:50 ftl -- common/autotest_common.sh@868 -- # return 0
00:32:42.202 01:52:50 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:32:42.464 nvme0n1
00:32:42.464 01:52:50 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:32:42.464 01:52:50 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:32:42.464 01:52:50 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:42.725 01:52:51 ftl -- ftl/common.sh@28 -- # stores=77320a84-e645-4271-bfde-f577f3ff9650
00:32:42.725 01:52:51 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:32:42.725 01:52:51 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77320a84-e645-4271-bfde-f577f3ff9650
00:32:42.987 01:52:51 ftl -- ftl/ftl.sh@23 -- # killprocess 84366
00:32:42.987 01:52:51 ftl -- common/autotest_common.sh@954 -- # '[' -z 84366 ']'
00:32:42.987 01:52:51 ftl -- common/autotest_common.sh@958 -- # kill -0 84366
00:32:42.987 01:52:51 ftl -- common/autotest_common.sh@959 -- # uname
00:32:42.987 01:52:51 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:42.987 01:52:51 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84366
00:32:42.987 killing process with pid 84366
01:52:51 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:42.987 01:52:51 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:42.987 01:52:51 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84366'
00:32:42.987 01:52:51 ftl -- common/autotest_common.sh@973 -- # kill 84366
00:32:42.987 01:52:51 ftl -- common/autotest_common.sh@978 -- # wait 84366
00:32:44.372 01:52:52 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:32:44.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:44.372 Waiting for block devices as requested
00:32:44.372 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:32:44.372 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:32:44.633 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:32:44.633 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:32:49.922 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:32:49.922 Remove shared memory files
01:52:58 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:32:49.922 01:52:58 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:49.922 01:52:58 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:32:49.922 01:52:58 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:32:49.922 01:52:58 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:32:49.922 01:52:58 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:49.922 01:52:58 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:32:49.922 ************************************
00:32:49.922 END TEST ftl
00:32:49.922 ************************************
00:32:49.922
00:32:49.922 real 18m24.608s
00:32:49.922 user 20m35.622s
00:32:49.922 sys 1m24.346s
00:32:49.922 01:52:58 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:49.922 01:52:58 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:49.922 01:52:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:32:49.922 01:52:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:32:49.922 01:52:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:32:49.922 01:52:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:32:49.922 01:52:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:32:49.922 01:52:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:32:49.922 01:52:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:32:49.922 01:52:58 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:32:49.922 01:52:58 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:32:49.922 01:52:58 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:32:49.922 01:52:58 -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:49.922 01:52:58 -- common/autotest_common.sh@10 -- # set +x
00:32:49.922 01:52:58 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:32:49.922 01:52:58 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:32:49.922 01:52:58 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:32:49.922 01:52:58 -- common/autotest_common.sh@10 -- # set +x
00:32:51.308 INFO: APP EXITING
00:32:51.308 INFO: killing all VMs
00:32:51.308 INFO: killing vhost app
00:32:51.308 INFO: EXIT DONE
00:32:51.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:52.140 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:32:52.140 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:32:52.140 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:32:52.140 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:32:52.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:52.973 Cleaning
00:32:52.973 Removing: /var/run/dpdk/spdk0/config
00:32:52.973 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:52.973 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:52.973 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:52.973 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:52.973 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:52.973 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:52.973 Removing: /var/run/dpdk/spdk0
00:32:52.973 Removing: /var/run/dpdk/spdk_pid56981
00:32:52.973 Removing: /var/run/dpdk/spdk_pid57183
00:32:52.973 Removing: /var/run/dpdk/spdk_pid57396
00:32:52.973 Removing: /var/run/dpdk/spdk_pid57489
00:32:52.973 Removing: /var/run/dpdk/spdk_pid57528
00:32:52.973 Removing: /var/run/dpdk/spdk_pid57645
00:32:52.973 Removing: /var/run/dpdk/spdk_pid57663
00:32:52.973 Removing: /var/run/dpdk/spdk_pid57857
00:32:52.973 Removing: /var/run/dpdk/spdk_pid57950
00:32:52.973 Removing: /var/run/dpdk/spdk_pid58046
00:32:52.973 Removing: /var/run/dpdk/spdk_pid58157
00:32:52.973 Removing: /var/run/dpdk/spdk_pid58243
00:32:52.973 Removing: /var/run/dpdk/spdk_pid58282
00:32:52.973 Removing: /var/run/dpdk/spdk_pid58319
00:32:52.973 Removing: /var/run/dpdk/spdk_pid58389
00:32:52.973 Removing: /var/run/dpdk/spdk_pid58468
00:32:52.973 Removing: /var/run/dpdk/spdk_pid58893
00:32:52.973 Removing: /var/run/dpdk/spdk_pid58957
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59009
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59025
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59116
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59132
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59223
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59239
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59292
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59310
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59363
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59381
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59536
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59572
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59656
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59822
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59906
00:32:52.973 Removing: /var/run/dpdk/spdk_pid59943
00:32:52.974 Removing: /var/run/dpdk/spdk_pid60375
00:32:52.974 Removing: /var/run/dpdk/spdk_pid60467
00:32:52.974 Removing: /var/run/dpdk/spdk_pid60584
00:32:52.974 Removing: /var/run/dpdk/spdk_pid60637
00:32:52.974 Removing: /var/run/dpdk/spdk_pid60657
00:32:52.974 Removing: /var/run/dpdk/spdk_pid60741
00:32:52.974 Removing: /var/run/dpdk/spdk_pid61358
00:32:52.974 Removing: /var/run/dpdk/spdk_pid61395
00:32:52.974 Removing: /var/run/dpdk/spdk_pid61854
00:32:52.974 Removing: /var/run/dpdk/spdk_pid61952
00:32:52.974 Removing: /var/run/dpdk/spdk_pid62062
00:32:52.974 Removing: /var/run/dpdk/spdk_pid62115
00:32:52.974 Removing: /var/run/dpdk/spdk_pid62135
00:32:52.974 Removing: /var/run/dpdk/spdk_pid62166
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64001
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64138
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64142
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64154
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64200
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64204
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64216
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64261
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64265
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64277
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64322
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64326
00:32:52.974 Removing: /var/run/dpdk/spdk_pid64338
00:32:52.974 Removing: /var/run/dpdk/spdk_pid65703
00:32:52.974 Removing: /var/run/dpdk/spdk_pid65800
00:32:52.974 Removing: /var/run/dpdk/spdk_pid67202
00:32:52.974 Removing: /var/run/dpdk/spdk_pid68581
00:32:52.974 Removing: /var/run/dpdk/spdk_pid68663
00:32:52.974 Removing: /var/run/dpdk/spdk_pid68751
00:32:52.974 Removing: /var/run/dpdk/spdk_pid68827
00:32:52.974 Removing: /var/run/dpdk/spdk_pid68925
00:32:52.974 Removing: /var/run/dpdk/spdk_pid68996
00:32:52.974 Removing: /var/run/dpdk/spdk_pid69139
00:32:52.974 Removing: /var/run/dpdk/spdk_pid69497
00:32:52.974 Removing: /var/run/dpdk/spdk_pid69528
00:32:53.236 Removing: /var/run/dpdk/spdk_pid69979
00:32:53.236 Removing: /var/run/dpdk/spdk_pid70159
00:32:53.236 Removing: /var/run/dpdk/spdk_pid70258
00:32:53.236 Removing: /var/run/dpdk/spdk_pid70373
00:32:53.236 Removing: /var/run/dpdk/spdk_pid70415
00:32:53.236 Removing: /var/run/dpdk/spdk_pid70446
00:32:53.236 Removing: /var/run/dpdk/spdk_pid70736
00:32:53.236 Removing: /var/run/dpdk/spdk_pid70796
00:32:53.236 Removing: /var/run/dpdk/spdk_pid70869
00:32:53.236 Removing: /var/run/dpdk/spdk_pid71267
00:32:53.236 Removing: /var/run/dpdk/spdk_pid71412
00:32:53.236 Removing: /var/run/dpdk/spdk_pid72216
00:32:53.236 Removing: /var/run/dpdk/spdk_pid72343
00:32:53.236 Removing: /var/run/dpdk/spdk_pid72512
00:32:53.236 Removing: /var/run/dpdk/spdk_pid72626
00:32:53.236 Removing: /var/run/dpdk/spdk_pid72924
00:32:53.236 Removing: /var/run/dpdk/spdk_pid73177
00:32:53.236 Removing: /var/run/dpdk/spdk_pid73535
00:32:53.236 Removing: /var/run/dpdk/spdk_pid73724
00:32:53.236 Removing: /var/run/dpdk/spdk_pid73877
00:32:53.236 Removing: /var/run/dpdk/spdk_pid73930
00:32:53.236 Removing: /var/run/dpdk/spdk_pid74139
00:32:53.236 Removing: /var/run/dpdk/spdk_pid74170
00:32:53.236 Removing: /var/run/dpdk/spdk_pid74228
00:32:53.236 Removing: /var/run/dpdk/spdk_pid74536
00:32:53.236 Removing: /var/run/dpdk/spdk_pid74766
00:32:53.236 Removing: /var/run/dpdk/spdk_pid75200
00:32:53.236 Removing: /var/run/dpdk/spdk_pid75946
00:32:53.236 Removing: /var/run/dpdk/spdk_pid76836
00:32:53.236 Removing: /var/run/dpdk/spdk_pid77667
00:32:53.236 Removing: /var/run/dpdk/spdk_pid77809
00:32:53.236 Removing: /var/run/dpdk/spdk_pid77899
00:32:53.236 Removing: /var/run/dpdk/spdk_pid78507
00:32:53.236 Removing: /var/run/dpdk/spdk_pid78561
00:32:53.236 Removing: /var/run/dpdk/spdk_pid79141
00:32:53.236 Removing: /var/run/dpdk/spdk_pid79621
00:32:53.236 Removing: /var/run/dpdk/spdk_pid80381
00:32:53.236 Removing: /var/run/dpdk/spdk_pid80498
00:32:53.236 Removing: /var/run/dpdk/spdk_pid80545
00:32:53.236 Removing: /var/run/dpdk/spdk_pid80609
00:32:53.236 Removing: /var/run/dpdk/spdk_pid80662
00:32:53.236 Removing: /var/run/dpdk/spdk_pid80726
00:32:53.236 Removing: /var/run/dpdk/spdk_pid80906
00:32:53.236 Removing: /var/run/dpdk/spdk_pid80987
00:32:53.236 Removing: /var/run/dpdk/spdk_pid81055
00:32:53.236 Removing: /var/run/dpdk/spdk_pid81116
00:32:53.236 Removing: /var/run/dpdk/spdk_pid81151
00:32:53.236 Removing: /var/run/dpdk/spdk_pid81212
00:32:53.236 Removing: /var/run/dpdk/spdk_pid81392
00:32:53.236 Removing: /var/run/dpdk/spdk_pid81617
00:32:53.236 Removing: /var/run/dpdk/spdk_pid82246
00:32:53.236 Removing: /var/run/dpdk/spdk_pid83060
00:32:53.236 Removing: /var/run/dpdk/spdk_pid83673
00:32:53.236 Removing: /var/run/dpdk/spdk_pid84366
00:32:53.236 Clean
00:32:53.236 01:53:01 -- common/autotest_common.sh@1453 -- # return 0
00:32:53.236 01:53:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:32:53.236 01:53:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:53.236 01:53:01 -- common/autotest_common.sh@10 -- # set +x
00:32:53.498 01:53:01 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:32:53.498 01:53:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:53.498 01:53:01 -- common/autotest_common.sh@10 -- # set +x
00:32:53.498 01:53:01 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:53.498 01:53:01 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:53.498 01:53:01 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:53.498 01:53:01 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:32:53.498 01:53:01 -- spdk/autotest.sh@398 -- # hostname
00:32:53.498 01:53:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:53.498 geninfo: WARNING: invalid characters removed from testname!
00:33:20.136 01:53:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:22.680 01:53:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:24.590 01:53:33 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:27.133 01:53:35 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:28.516 01:53:36 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:31.055 01:53:39 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:33.595 01:53:41 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:33.595 01:53:41 -- spdk/autorun.sh@1 -- $ timing_finish
00:33:33.595 01:53:41 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:33:33.595 01:53:41 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:33.595 01:53:41 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:33.595 01:53:41 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:33.595 + [[ -n 5025 ]]
00:33:33.595 + sudo kill 5025
00:33:33.605 [Pipeline] }
00:33:33.617 [Pipeline] // timeout
00:33:33.623 [Pipeline] }
00:33:33.637 [Pipeline] // stage
00:33:33.643 [Pipeline] }
00:33:33.657 [Pipeline] // catchError
00:33:33.665 [Pipeline] stage
00:33:33.668 [Pipeline] { (Stop VM)
00:33:33.681 [Pipeline] sh
00:33:33.966 + vagrant halt
00:33:36.508 ==> default: Halting domain...
00:33:41.819 [Pipeline] sh
00:33:42.103 + vagrant destroy -f
00:33:44.647 ==> default: Removing domain...
00:33:45.229 [Pipeline] sh
00:33:45.509 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:45.518 [Pipeline] }
00:33:45.531 [Pipeline] // stage
00:33:45.535 [Pipeline] }
00:33:45.548 [Pipeline] // dir
00:33:45.552 [Pipeline] }
00:33:45.566 [Pipeline] // wrap
00:33:45.571 [Pipeline] }
00:33:45.583 [Pipeline] // catchError
00:33:45.591 [Pipeline] stage
00:33:45.593 [Pipeline] { (Epilogue)
00:33:45.604 [Pipeline] sh
00:33:45.880 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:51.166 [Pipeline] catchError
00:33:51.168 [Pipeline] {
00:33:51.183 [Pipeline] sh
00:33:51.537 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:51.537 Artifacts sizes are good
00:33:51.546 [Pipeline] }
00:33:51.560 [Pipeline] // catchError
00:33:51.570 [Pipeline] archiveArtifacts
00:33:51.578 Archiving artifacts
00:33:51.702 [Pipeline] cleanWs
00:33:51.714 [WS-CLEANUP] Deleting project workspace...
00:33:51.714 [WS-CLEANUP] Deferred wipeout is used...
00:33:51.720 [WS-CLEANUP] done
00:33:51.722 [Pipeline] }
00:33:51.736 [Pipeline] // stage
00:33:51.741 [Pipeline] }
00:33:51.753 [Pipeline] // node
00:33:51.757 [Pipeline] End of Pipeline
00:33:51.795 Finished: SUCCESS